Nov 28 06:47:56 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 06:47:56 crc restorecon[4753]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 
06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc 
restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:47:56 crc restorecon[4753]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 28 06:47:57 crc kubenswrapper[4889]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 06:47:57 crc kubenswrapper[4889]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 06:47:57 crc kubenswrapper[4889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 06:47:57 crc kubenswrapper[4889]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 28 06:47:57 crc kubenswrapper[4889]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 28 06:47:57 crc kubenswrapper[4889]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.161014 4889 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166481 4889 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166515 4889 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166524 4889 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166533 4889 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166542 4889 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166550 4889 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166558 4889 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166564 4889 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166572 4889 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166579 4889 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166584 4889 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166590 4889 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166597 4889 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166605 4889 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166612 4889 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166618 4889 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166624 4889 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166630 4889 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166636 4889 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166642 4889 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166648 4889 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166653 4889 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166659 4889 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166665 4889 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166672 4889 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166678 4889 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166683 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166688 4889 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166694 4889 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166701 4889 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166739 4889 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166746 4889 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166754 4889 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166760 4889 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166766 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166771 4889 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166778 4889 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166785 4889 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166791 4889 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166824 4889 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166830 4889 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166836 4889 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166841 4889 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166846 4889 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166852 4889 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166858 4889 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166864 4889 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166869 4889 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166874 4889 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166880 4889 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166885 4889 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166892 4889 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166898 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166905 4889 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
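The long runs of "unrecognized feature gate" warnings are gate names managed on the OpenShift side (GatewayAPI, NewOLM, InsightsConfig, and so on) that the kubelet's embedded Kubernetes feature-gate registry does not know; the same set is re-logged each time the gate map is re-applied, which is why near-identical blocks recur below. A quick way to collapse the noise to the unique gate names, assuming the journal has been saved to a file (the path here is an assumption):

    # Sketch: collapse the repeated "unrecognized feature gate" warnings in a
    # saved journal dump down to unique gate names with occurrence counts.
    import re
    from collections import Counter

    pattern = re.compile(r"unrecognized feature gate: (\S+)")
    counts = Counter()

    with open("kubelet-journal.log", encoding="utf-8") as f:  # hypothetical path
        for line in f:
            for gate in pattern.findall(line):
                counts[gate] += 1

    for gate, n in sorted(counts.items()):
        print(f"{gate}: warned {n} time(s)")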
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166911 4889 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166916 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166924 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166933 4889 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166939 4889 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166946 4889 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166952 4889 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166959 4889 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166966 4889 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166973 4889 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166980 4889 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166987 4889 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.166994 4889 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.167001 4889 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.167007 4889 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.167013 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.167019 4889 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167417 4889 flags.go:64] FLAG: --address="0.0.0.0" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167435 4889 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167447 4889 flags.go:64] FLAG: --anonymous-auth="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167456 4889 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167466 4889 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167473 4889 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167484 4889 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167495 4889 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167504 4889 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167512 4889 
flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167520 4889 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167529 4889 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167540 4889 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167548 4889 flags.go:64] FLAG: --cgroup-root="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167556 4889 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167563 4889 flags.go:64] FLAG: --client-ca-file="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167571 4889 flags.go:64] FLAG: --cloud-config="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167579 4889 flags.go:64] FLAG: --cloud-provider="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167587 4889 flags.go:64] FLAG: --cluster-dns="[]" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167608 4889 flags.go:64] FLAG: --cluster-domain="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167616 4889 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167624 4889 flags.go:64] FLAG: --config-dir="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167631 4889 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167638 4889 flags.go:64] FLAG: --container-log-max-files="5" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167654 4889 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167660 4889 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167667 4889 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167674 4889 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167680 4889 flags.go:64] FLAG: --contention-profiling="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167686 4889 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167693 4889 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167700 4889 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167730 4889 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167739 4889 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167746 4889 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167752 4889 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167758 4889 flags.go:64] FLAG: --enable-load-reader="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167765 4889 flags.go:64] FLAG: --enable-server="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167771 4889 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 
06:47:57.167780 4889 flags.go:64] FLAG: --event-burst="100" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167786 4889 flags.go:64] FLAG: --event-qps="50" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167792 4889 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167799 4889 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167806 4889 flags.go:64] FLAG: --eviction-hard="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167815 4889 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167821 4889 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167828 4889 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167834 4889 flags.go:64] FLAG: --eviction-soft="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167843 4889 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167850 4889 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167857 4889 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167863 4889 flags.go:64] FLAG: --experimental-mounter-path="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167870 4889 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167876 4889 flags.go:64] FLAG: --fail-swap-on="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167882 4889 flags.go:64] FLAG: --feature-gates="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167897 4889 flags.go:64] FLAG: --file-check-frequency="20s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167903 4889 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167910 4889 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167917 4889 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167924 4889 flags.go:64] FLAG: --healthz-port="10248" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167930 4889 flags.go:64] FLAG: --help="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167936 4889 flags.go:64] FLAG: --hostname-override="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167943 4889 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167949 4889 flags.go:64] FLAG: --http-check-frequency="20s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167956 4889 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167962 4889 flags.go:64] FLAG: --image-credential-provider-config="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167968 4889 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167974 4889 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167981 4889 flags.go:64] FLAG: --image-service-endpoint="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167987 4889 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.167994 4889 flags.go:64] FLAG: --kube-api-burst="100" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168000 4889 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168007 4889 flags.go:64] FLAG: --kube-api-qps="50" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168013 4889 flags.go:64] FLAG: --kube-reserved="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168020 4889 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168026 4889 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168033 4889 flags.go:64] FLAG: --kubelet-cgroups="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168039 4889 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168046 4889 flags.go:64] FLAG: --lock-file="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168053 4889 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168059 4889 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168067 4889 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168077 4889 flags.go:64] FLAG: --log-json-split-stream="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168084 4889 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168090 4889 flags.go:64] FLAG: --log-text-split-stream="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168096 4889 flags.go:64] FLAG: --logging-format="text" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168103 4889 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168111 4889 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168118 4889 flags.go:64] FLAG: --manifest-url="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168125 4889 flags.go:64] FLAG: --manifest-url-header="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168134 4889 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168142 4889 flags.go:64] FLAG: --max-open-files="1000000" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168150 4889 flags.go:64] FLAG: --max-pods="110" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168157 4889 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168163 4889 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168170 4889 flags.go:64] FLAG: --memory-manager-policy="None" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168176 4889 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168182 4889 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168189 4889 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168195 4889 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168211 4889 flags.go:64] FLAG: --node-status-max-images="50" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168220 4889 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168229 4889 flags.go:64] FLAG: --oom-score-adj="-999" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168236 4889 flags.go:64] FLAG: --pod-cidr="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168244 4889 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168256 4889 flags.go:64] FLAG: --pod-manifest-path="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168264 4889 flags.go:64] FLAG: --pod-max-pids="-1" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168272 4889 flags.go:64] FLAG: --pods-per-core="0" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168281 4889 flags.go:64] FLAG: --port="10250" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168288 4889 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168297 4889 flags.go:64] FLAG: --provider-id="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168305 4889 flags.go:64] FLAG: --qos-reserved="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168312 4889 flags.go:64] FLAG: --read-only-port="10255" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168320 4889 flags.go:64] FLAG: --register-node="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168329 4889 flags.go:64] FLAG: --register-schedulable="true" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168339 4889 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168353 4889 flags.go:64] FLAG: --registry-burst="10" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168360 4889 flags.go:64] FLAG: --registry-qps="5" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168368 4889 flags.go:64] FLAG: --reserved-cpus="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168377 4889 flags.go:64] FLAG: --reserved-memory="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168388 4889 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168396 4889 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168404 4889 flags.go:64] FLAG: --rotate-certificates="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168412 4889 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168419 4889 flags.go:64] FLAG: --runonce="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168427 4889 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168435 4889 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168443 4889 flags.go:64] FLAG: --seccomp-default="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168451 4889 flags.go:64] FLAG: --serialize-image-pulls="true" 
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168458 4889 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168467 4889 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168475 4889 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168483 4889 flags.go:64] FLAG: --storage-driver-password="root" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168491 4889 flags.go:64] FLAG: --storage-driver-secure="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168498 4889 flags.go:64] FLAG: --storage-driver-table="stats" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168506 4889 flags.go:64] FLAG: --storage-driver-user="root" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168514 4889 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168521 4889 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168529 4889 flags.go:64] FLAG: --system-cgroups="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168537 4889 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168551 4889 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168559 4889 flags.go:64] FLAG: --tls-cert-file="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168567 4889 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168577 4889 flags.go:64] FLAG: --tls-min-version="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168586 4889 flags.go:64] FLAG: --tls-private-key-file="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168594 4889 flags.go:64] FLAG: --topology-manager-policy="none" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168602 4889 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168611 4889 flags.go:64] FLAG: --topology-manager-scope="container" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168619 4889 flags.go:64] FLAG: --v="2" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168629 4889 flags.go:64] FLAG: --version="false" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168639 4889 flags.go:64] FLAG: --vmodule="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168651 4889 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.168659 4889 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168867 4889 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168880 4889 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
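The flags.go:64 entries above are the kubelet echoing its effective command-line flags, one FLAG: --name="value" pair per entry. Recovering them into a dictionary makes it easy to diff the flag set between two boots; a sketch, again assuming the journal has been saved to a file:

    # Sketch: recover the kubelet's effective flags from the "flags.go:64] FLAG:"
    # entries above into a dict (journal file path is an assumption).
    import re

    flag_re = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')
    flags = {}

    with open("kubelet-journal.log", encoding="utf-8") as f:  # hypothetical path
        for line in f:
            for name, value in flag_re.findall(line):
                flags[name] = value

    print(flags.get("--config"))         # /etc/kubernetes/kubelet.conf
    print(flags.get("--cgroup-driver"))  # cgroupfs on the command line; the CRI
                                         # runtime later overrides this to systemd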
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168889 4889 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168896 4889 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168903 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168909 4889 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168915 4889 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168923 4889 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168929 4889 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168936 4889 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168943 4889 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168950 4889 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168957 4889 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168964 4889 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168970 4889 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168976 4889 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168982 4889 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168988 4889 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.168994 4889 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169000 4889 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169006 4889 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169012 4889 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169018 4889 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169024 4889 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169031 4889 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169039 4889 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169047 4889 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169053 4889 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169060 4889 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169067 4889 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169074 4889 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169080 4889 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169086 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169092 4889 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169100 4889 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169107 4889 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169113 4889 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169122 4889 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169130 4889 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169137 4889 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169145 4889 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169154 4889 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169162 4889 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169169 4889 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169176 4889 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169183 4889 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169190 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169196 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169213 4889 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169219 4889 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169226 4889 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169233 4889 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169239 4889 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169246 4889 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169252 4889 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169264 4889 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169271 4889 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169277 4889 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169283 4889 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169290 4889 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169296 4889 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169304 4889 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169312 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169318 4889 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169325 4889 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169331 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169338 4889 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169344 4889 feature_gate.go:330] unrecognized 
feature gate: PlatformOperators Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169351 4889 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169357 4889 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.169365 4889 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.169389 4889 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.179685 4889 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.179733 4889 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179805 4889 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179813 4889 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179817 4889 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179822 4889 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179826 4889 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179830 4889 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179834 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179837 4889 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179842 4889 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179846 4889 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179852 4889 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179856 4889 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179861 4889 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179867 4889 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179872 4889 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179877 4889 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179882 4889 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179887 4889 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179892 4889 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179898 4889 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179902 4889 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179907 4889 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179912 4889 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179916 4889 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179921 4889 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179927 4889 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179931 4889 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179935 4889 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179939 4889 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179943 4889 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179946 4889 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179950 4889 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179954 4889 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179958 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179962 4889 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179966 4889 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179970 4889 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179974 4889 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179978 4889 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179982 4889 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179986 4889 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:47:57 crc 
kubenswrapper[4889]: W1128 06:47:57.179989 4889 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179993 4889 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.179997 4889 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180001 4889 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180005 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180010 4889 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180014 4889 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180018 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180022 4889 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180027 4889 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180031 4889 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180035 4889 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180039 4889 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180042 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180046 4889 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180050 4889 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180054 4889 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180057 4889 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180061 4889 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180064 4889 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180069 4889 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180072 4889 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180076 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180079 4889 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180083 4889 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180087 4889 feature_gate.go:330] unrecognized feature gate: 
VSphereDriverConfiguration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180090 4889 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180094 4889 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180098 4889 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180102 4889 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.180109 4889 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180257 4889 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180266 4889 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180270 4889 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180275 4889 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180279 4889 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180283 4889 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180288 4889 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180293 4889 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180300 4889 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180304 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180309 4889 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180313 4889 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180318 4889 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180323 4889 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180327 4889 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180330 4889 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180334 4889 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180338 4889 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 
06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180342 4889 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180345 4889 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180351 4889 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180354 4889 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180358 4889 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180361 4889 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180365 4889 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180369 4889 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180373 4889 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180377 4889 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180381 4889 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180385 4889 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180390 4889 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180395 4889 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180399 4889 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180403 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180406 4889 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180410 4889 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180414 4889 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180418 4889 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180423 4889 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180429 4889 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180434 4889 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180439 4889 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180443 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180448 4889 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180451 4889 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180455 4889 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180459 4889 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180463 4889 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180466 4889 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180470 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180474 4889 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180478 4889 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180483 4889 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180487 4889 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180490 4889 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180494 4889 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180498 4889 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180502 4889 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180505 4889 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180509 4889 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180513 4889 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180518 4889 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180521 4889 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180525 4889 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180529 4889 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180533 4889 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180537 4889 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180541 4889 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180544 4889 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180548 4889 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.180552 4889 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.180559 4889 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.180916 4889 server.go:940] "Client rotation is on, will bootstrap in background" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.183666 4889 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.183765 4889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
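The feature_gate.go:386 summaries print the finally resolved gate map three times in Go's map[...] literal syntax, and all three agree. A small sketch for turning that literal into a Python dict so resolved gates can be compared across boots (the literal below is abridged from the full summary above):

    # Sketch: parse the Go-style "feature gates: {map[Key:bool ...]}" summary
    # printed at feature_gate.go:386 into a Python dict. Abridged input.
    import re

    summary = ("{map[CloudDualStackNodeIPs:true "
               "DisableKubeletCloudCredentialProviders:true KMSv1:true "
               "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

    gates = {k: v == "true" for k, v in re.findall(r"(\w+):(true|false)", summary)}
    print(gates)  # {'CloudDualStackNodeIPs': True, ..., 'VolumeAttributesClass': False}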
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.184293 4889 server.go:997] "Starting client certificate rotation"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.184317 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.184672 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 01:37:40.8508084 +0000 UTC
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.184859 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.190027 4889 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.192276 4889 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.193023 4889 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.203255 4889 log.go:25] "Validated CRI v1 runtime API"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.221447 4889 log.go:25] "Validated CRI v1 image API"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.223631 4889 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.226638 4889 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-06-42-33-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.226754 4889 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.248966 4889 manager.go:217] Machine: {Timestamp:2025-11-28 06:47:57.246324396 +0000 UTC m=+0.216558601 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c2965de2-18dd-4931-940c-3947028e6c9f BootID:980f1d8a-b8dc-483a-92cf-447ce2d2f4e8 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:79:70:59 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:79:70:59 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f3:ca:fb Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:45:70:2f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:36:dd:94 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6c:a3:98 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:73:fd:9b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:aa:2f:5d:4a:3e:e4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:5a:5c:43:15:e3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.249390 4889 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.249598 4889 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.250547 4889 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.250926 4889 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.250990 4889 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.251362 4889 topology_manager.go:138] "Creating topology manager with none policy"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.251608 4889 container_manager_linux.go:303] "Creating device plugin manager"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.251890 4889 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.251959 4889 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.252225 4889 state_mem.go:36] "Initialized new in-memory state store"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.252851 4889 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.253783 4889 kubelet.go:418] "Attempting to sync node with API server"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.253817 4889 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.253854 4889 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.253881 4889 kubelet.go:324] "Adding apiserver pod source"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.253902 4889 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.255985 4889 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.256211 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.256361 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.256464 4889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.256547 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.256736 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.257401 4889 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258195 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258259 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258283 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258297 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258318 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258333 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258346 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258366 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258380 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258399 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258446 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.258470 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.259060 4889 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.259861 4889 server.go:1280] "Started kubelet"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.260266 4889 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 28 06:47:57 crc systemd[1]: Started Kubernetes Kubelet.
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.262137 4889 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.262457 4889 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.265742 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.265831 4889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.266618 4889 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.266987 4889 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.267018 4889 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.266992 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:31:12.236351207 +0000 UTC
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.267057 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 295h43m14.969297827s for next certificate rotation
Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.267024 4889 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.267194 4889 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.268240 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms"
Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.262985 4889 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c18d9afdbfc1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:47:57.259791387 +0000 UTC m=+0.230025582,LastTimestamp:2025-11-28 06:47:57.259791387 +0000 UTC m=+0.230025582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.268762 4889 factory.go:55] Registering systemd factory
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.268822 4889 factory.go:221] Registration of the systemd container factory successfully
Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.269529 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.269601 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.272571 4889 server.go:460] "Adding debug handlers to kubelet server"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.273214 4889 factory.go:153] Registering CRI-O factory
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.273242 4889 factory.go:221] Registration of the crio container factory successfully
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.273343 4889 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.273373 4889 factory.go:103] Registering Raw factory
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.273397 4889 manager.go:1196] Started watching for new ooms in manager
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.278454 4889 manager.go:319] Starting recovery of all containers
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287081 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287159 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287182 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287201 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287221 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287241 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287262 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287283 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287309 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287328 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287349 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287368 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287386 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287409 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287453 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287500 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287519 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287538 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287557 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287575 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287613 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287633 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.287653 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.288649 4889 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.288751 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.288837 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.288879 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.288913 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.288943 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.288969 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289033 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289093 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289187 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289218 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289244 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289271 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289297 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289322 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289348 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289405 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289431 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289455 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289486 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289511 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289538 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289562 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289591 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289618 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289642 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289667 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289690 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289749 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289779 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289896 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.289941 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290032 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290066 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290129 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290157 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290183 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290208 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290233 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290278 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290347 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290375 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290399 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290425 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290449 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290477 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290510 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290542 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290597 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290629 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290655 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290736 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290767 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290794 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290820 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290902 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290954 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.290981 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291008 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291035 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291060 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291089 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291113 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291140 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291216 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291242 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291349 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291376 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291402 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291435 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291507 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291541 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291635 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291661 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291691 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291745 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291775 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291800 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291836 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291857 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291911 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.291931 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292075 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292163 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292185 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292206 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292233 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292254 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292347 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292379 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292405 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292426 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292446 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292464 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.292490 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295089 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295161 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295214 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295249 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295270 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295305 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295324 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295347 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295367 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295404 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295425 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295471 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295493 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295513 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295539 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295571 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295591 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295612 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295638 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295660 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753"
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295690 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295769 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295797 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295823 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295844 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295863 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295889 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295914 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295936 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295956 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.295978 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296004 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296030 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296050 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296069 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296089 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296115 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296143 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296165 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296184 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296207 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296232 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296261 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296302 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296323 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296345 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296365 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296384 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296403 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296422 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296444 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296464 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296484 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296503 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296524 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296544 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296571 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296610 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296634 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296654 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296680 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296698 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296753 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296781 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296807 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296843 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296873 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296903 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296928 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296973 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.296992 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297018 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297037 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297056 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297089 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297117 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297137 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297236 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297256 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297276 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297302 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297321 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297342 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297362 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297385 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297404 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297424 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297443 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297463 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297483 4889 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297501 4889 reconstruct.go:97] "Volume reconstruction finished" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.297515 4889 reconciler.go:26] "Reconciler: start to sync state" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.315447 4889 manager.go:324] Recovery completed Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.328917 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.328973 4889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.330378 4889 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.330415 4889 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.330456 4889 kubelet.go:2335] "Starting kubelet main sync loop" Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.330515 4889 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.330821 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.330893 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.330907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.331895 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.332048 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.332215 4889 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.332290 4889 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.332355 4889 state_mem.go:36] "Initialized new in-memory state store" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.341396 4889 policy_none.go:49] "None policy: Start" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.342829 4889 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.342909 4889 state_mem.go:35] "Initializing new in-memory state store" Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.368213 4889 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.419131 4889 manager.go:334] "Starting Device Plugin manager" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.419185 4889 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.419201 4889 server.go:79] "Starting device plugin registration server" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.419678 4889 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.419696 4889 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.420196 4889 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" 
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.420476 4889 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.420499 4889 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.426339 4889 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.431594 4889 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.431747 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.432992 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.433035 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.433048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.433258 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.433929 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.434010 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.434131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.434160 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.434171 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.434289 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.434557 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.434635 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435114 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435149 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435160 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435228 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435247 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435476 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435610 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435638 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435647 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435782 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.435825 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.436379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.436421 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.436435 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.436567 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.436685 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.436729 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437225 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437272 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437829 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437844 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437858 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437867 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437872 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.437879 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.438055 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.438081 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.439001 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.439085 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.439142 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.469213 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.499896 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.499942 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.499972 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.499991 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500013 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500030 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500049 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500066 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500087 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500107 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500126 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500144 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500162 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500198 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.500214 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.519959 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:57 crc kubenswrapper[4889]: 
I1128 06:47:57.521802 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.521874 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.521889 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.521929 4889 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.522791 4889 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601329 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601410 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601433 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601472 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601489 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601506 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601538 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601555 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601572 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601587 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601617 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601633 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601630 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601723 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601649 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601804 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601863 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 
crc kubenswrapper[4889]: I1128 06:47:57.601868 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601888 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601879 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601937 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601783 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601974 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601939 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601985 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.601906 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.602028 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.723453 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.724856 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.724898 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.724910 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.724939 4889 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.725740 4889 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.768533 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.775601 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.796663 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
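The registration failure above is expected at this point: the kubelet is itself starting the static kube-apiserver pod it needs to register against, so every POST to api-int.crc.testing:6443 is refused until that pod serves. The kubelet just keeps retrying; the same pattern is visible in the lease controller below, whose retry interval doubles (interval="800ms", then "1.6s", then "3.2s"). A sketch of that retry shape, with an illustrative doubling backoff rather than the kubelet's actual client code:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Illustrative retry loop: dial the apiserver endpoint from the log and
// back off between attempts until the socket accepts connections.
func main() {
	const addr = "api-int.crc.testing:6443" // endpoint seen in the log
	delay := 800 * time.Millisecond          // mirrors the lease retry's first interval
	for {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable; registration can proceed")
			return
		}
		fmt.Printf("dial failed (%v); retrying in %v\n", err, delay)
		time.Sleep(delay)
		if delay < 7*time.Second {
			delay *= 2 // 800ms -> 1.6s -> 3.2s, as in the lease controller's log
		}
	}
}
```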
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.803745 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-1be9627f2c351defd0b75b335761e832e83c02a470ab358351c76dead60eba7b WatchSource:0}: Error finding container 1be9627f2c351defd0b75b335761e832e83c02a470ab358351c76dead60eba7b: Status 404 returned error can't find the container with id 1be9627f2c351defd0b75b335761e832e83c02a470ab358351c76dead60eba7b Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.805770 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-eca9e07a7afcb94983ab3479664761a1edd9fdb5732e63d738f74afc97a3227e WatchSource:0}: Error finding container eca9e07a7afcb94983ab3479664761a1edd9fdb5732e63d738f74afc97a3227e: Status 404 returned error can't find the container with id eca9e07a7afcb94983ab3479664761a1edd9fdb5732e63d738f74afc97a3227e Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.812702 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.816132 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8cfbdc101a49429663a78f562b14d0537c24775efd382cf7689b643efe3f6ace WatchSource:0}: Error finding container 8cfbdc101a49429663a78f562b14d0537c24775efd382cf7689b643efe3f6ace: Status 404 returned error can't find the container with id 8cfbdc101a49429663a78f562b14d0537c24775efd382cf7689b643efe3f6ace Nov 28 06:47:57 crc kubenswrapper[4889]: I1128 06:47:57.817354 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.833336 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-bc158561d5e0436d9991b939de8111ee577b650520657fea531f2fbde5a6423a WatchSource:0}: Error finding container bc158561d5e0436d9991b939de8111ee577b650520657fea531f2fbde5a6423a: Status 404 returned error can't find the container with id bc158561d5e0436d9991b939de8111ee577b650520657fea531f2fbde5a6423a Nov 28 06:47:57 crc kubenswrapper[4889]: W1128 06:47:57.846576 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-eff675c96243f8ad09052ba68a83b4721d936c93e0d0a473ffd16b939d7dba5b WatchSource:0}: Error finding container eff675c96243f8ad09052ba68a83b4721d936c93e0d0a473ffd16b939d7dba5b: Status 404 returned error can't find the container with id eff675c96243f8ad09052ba68a83b4721d936c93e0d0a473ffd16b939d7dba5b Nov 28 06:47:57 crc kubenswrapper[4889]: E1128 06:47:57.870613 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.126549 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.127978 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.128031 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.128046 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.128092 4889 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:47:58 crc kubenswrapper[4889]: E1128 06:47:58.128728 4889 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.263887 4889 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Nov 28 06:47:58 crc kubenswrapper[4889]: W1128 06:47:58.274904 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Nov 28 06:47:58 crc kubenswrapper[4889]: E1128 06:47:58.275010 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: 
connect: connection refused" logger="UnhandledError" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.337799 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145" exitCode=0 Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.337936 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.338250 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eca9e07a7afcb94983ab3479664761a1edd9fdb5732e63d738f74afc97a3227e"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.338527 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.340463 4889 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f5fa1deac47de89310413793e9a4b6879c2fe9ce02252a0ae5d4039e97682e14" exitCode=0 Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.340538 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.340520 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f5fa1deac47de89310413793e9a4b6879c2fe9ce02252a0ae5d4039e97682e14"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.340570 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.340599 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1be9627f2c351defd0b75b335761e832e83c02a470ab358351c76dead60eba7b"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.340604 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.340801 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.341797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.341826 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.341838 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.343209 4889 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="54a95e4475ad2a22059540b3d681c06323b8b2dc0d26c12a2d96ca6ad1db039d" exitCode=0 Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.343260 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"54a95e4475ad2a22059540b3d681c06323b8b2dc0d26c12a2d96ca6ad1db039d"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.343226 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.343352 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"eff675c96243f8ad09052ba68a83b4721d936c93e0d0a473ffd16b939d7dba5b"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.343536 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.344796 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.344826 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.344835 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.345422 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.345479 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.345506 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.347101 4889 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3" exitCode=0 Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.347177 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.347203 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bc158561d5e0436d9991b939de8111ee577b650520657fea531f2fbde5a6423a"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.347269 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.348083 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.348106 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.348114 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.349366 4889 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6"} Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.349417 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8cfbdc101a49429663a78f562b14d0537c24775efd382cf7689b643efe3f6ace"} Nov 28 06:47:58 crc kubenswrapper[4889]: W1128 06:47:58.382829 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Nov 28 06:47:58 crc kubenswrapper[4889]: E1128 06:47:58.382985 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:47:58 crc kubenswrapper[4889]: W1128 06:47:58.507423 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Nov 28 06:47:58 crc kubenswrapper[4889]: E1128 06:47:58.507527 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:47:58 crc kubenswrapper[4889]: E1128 06:47:58.673146 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Nov 28 06:47:58 crc kubenswrapper[4889]: W1128 06:47:58.762912 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Nov 28 06:47:58 crc kubenswrapper[4889]: E1128 06:47:58.763002 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.929424 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.930782 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.930982 4889 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.931078 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:58 crc kubenswrapper[4889]: I1128 06:47:58.931196 4889 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.274818 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.362533 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.362586 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.362598 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.362688 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.364458 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.364507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.364523 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.367483 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.367530 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.367546 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.367559 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 
06:47:59.369074 4889 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a08aec2b97098e4ca9de414af1163097f87cc48db7051890be4aa28b4d64e2c6" exitCode=0 Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.369131 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a08aec2b97098e4ca9de414af1163097f87cc48db7051890be4aa28b4d64e2c6"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.369226 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.369949 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.369969 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.369978 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.373071 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a370a39260a5adc81e97d70a566dd47a37448a74bd5a12ca63c2f49b1c42352c"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.373088 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.374828 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.374849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.374857 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.388050 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.388121 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.388143 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9"} Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.388287 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.389339 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 
06:47:59.389375 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:47:59 crc kubenswrapper[4889]: I1128 06:47:59.389393 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.395034 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502"} Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.395168 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.396380 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.396408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.396420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.399808 4889 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f336cead37a416f4f914656d1e829c81d38c0d0ab16d84c0025c9a8a9d6e2a40" exitCode=0 Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.399856 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f336cead37a416f4f914656d1e829c81d38c0d0ab16d84c0025c9a8a9d6e2a40"} Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.399983 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.400103 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.400133 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.401688 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.401771 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.401795 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.401900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.401939 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.401960 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.401997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:00 
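The PLEG records above encode each static pod's startup sequence: a "container finished" with exitCode=0 followed by ContainerDied is an init container completing normally, and the subsequent ContainerStarted events are the main containers coming up. The event={...} payload is a printed Go struct with ID (pod UID), Type, and Data (container or sandbox ID). A sketch that groups these events per pod to reconstruct the order, with an illustrative regexp over journald lines on stdin:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Collects SyncLoop (PLEG) events per pod in journal order, which
// reconstructs each static pod's container startup sequence.
var plegRe = regexp.MustCompile(
	`SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"[^"]+","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	timeline := map[string][]string{}
	var pods []string // first-seen order
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := plegRe.FindStringSubmatch(sc.Text()); m != nil {
			pod, typ, id := m[1], m[2], m[3]
			if _, seen := timeline[pod]; !seen {
				pods = append(pods, pod)
			}
			timeline[pod] = append(timeline[pod], typ+" "+id[:12])
		}
	}
	for _, pod := range pods {
		fmt.Println(pod)
		for _, ev := range timeline[pod] {
			fmt.Println("  ", ev)
		}
	}
}
```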
Nov 28 06:48:00 crc kubenswrapper[4889]: I1128 06:48:00.402081 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:01 crc kubenswrapper[4889]: I1128 06:48:01.409973 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"594310436da8110bf2476797deaa2281c6fa2f26067d3ec907a0d838d3030030"}
Nov 28 06:48:01 crc kubenswrapper[4889]: I1128 06:48:01.410044 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b96ecbbe95f86b14edadc56f797e28fdc9378da5e2c7219a9eba784401ae0ba7"}
Nov 28 06:48:01 crc kubenswrapper[4889]: I1128 06:48:01.410067 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"24b5cf23ba556a9096e498adfe4e1f169e1fde37feec39c01c50fd3bfb657470"}
Nov 28 06:48:01 crc kubenswrapper[4889]: I1128 06:48:01.410080 4889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 06:48:01 crc kubenswrapper[4889]: I1128 06:48:01.410162 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:01 crc kubenswrapper[4889]: I1128 06:48:01.411957 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:01 crc kubenswrapper[4889]: I1128 06:48:01.412018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:01 crc kubenswrapper[4889]: I1128 06:48:01.412041 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.078218 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.078438 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.079759 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.079804 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.079822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.192211 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.421188 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.421200 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dba379fbda1c94af8cc362c5b2290c2b0b8791f90741b754c22b35f0d10414d1"}
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.421299 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2dc00da3e01784ac5d582dd6e7cef5c4a84dd188e33114407630935164c1e745"}
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.421486 4889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.421582 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.423647 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.423680 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.423721 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.423817 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.423856 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.423887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:02 crc kubenswrapper[4889]: I1128 06:48:02.964787 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.423654 4889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.423755 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.423780 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.425279 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.425310 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.425322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.425788 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.425838 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:03 crc kubenswrapper[4889]: I1128 06:48:03.425862 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.436076 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.436374 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.438381 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.438443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.438456 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.445004 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.445226 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.446406 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.446532 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.446547 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.677921 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.678308 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.680295 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.680353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:04 crc kubenswrapper[4889]: I1128 06:48:04.680392 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:05 crc kubenswrapper[4889]: I1128 06:48:05.078777 4889 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 06:48:05 crc kubenswrapper[4889]: I1128 06:48:05.078906 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 06:48:07 crc kubenswrapper[4889]: I1128 06:48:07.422324 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Nov 28 06:48:07 crc kubenswrapper[4889]: I1128 06:48:07.423123 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:07 crc kubenswrapper[4889]: I1128 06:48:07.424884 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:07 crc kubenswrapper[4889]: I1128 06:48:07.424949 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:07 crc kubenswrapper[4889]: I1128 06:48:07.424971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:07 crc kubenswrapper[4889]: E1128 06:48:07.426452 4889 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 06:48:08 crc kubenswrapper[4889]: I1128 06:48:08.576853 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:48:08 crc kubenswrapper[4889]: I1128 06:48:08.577187 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:08 crc kubenswrapper[4889]: I1128 06:48:08.583304 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:08 crc kubenswrapper[4889]: I1128 06:48:08.583356 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:08 crc kubenswrapper[4889]: I1128 06:48:08.583370 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:08 crc kubenswrapper[4889]: I1128 06:48:08.583545 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:48:08 crc kubenswrapper[4889]: I1128 06:48:08.914975 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:48:08 crc kubenswrapper[4889]: E1128 06:48:08.932523 4889 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Nov 28 06:48:09 crc kubenswrapper[4889]: I1128 06:48:09.263969 4889 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Nov 28 06:48:09 crc kubenswrapper[4889]: E1128 06:48:09.276568 4889 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Nov 28 06:48:09 crc kubenswrapper[4889]: I1128 06:48:09.446413 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:09 crc kubenswrapper[4889]: I1128 06:48:09.448019 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:09 crc kubenswrapper[4889]: I1128 06:48:09.448099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:09 crc kubenswrapper[4889]: I1128 06:48:09.448123 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:10 crc kubenswrapper[4889]: E1128 06:48:10.188390 4889 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187c18d9afdbfc1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:47:57.259791387 +0000 UTC m=+0.230025582,LastTimestamp:2025-11-28 06:47:57.259791387 +0000 UTC m=+0.230025582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 06:48:10 crc kubenswrapper[4889]: W1128 06:48:10.256248 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.256463 4889 trace.go:236] Trace[1879485160]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:48:00.254) (total time: 10001ms):
Nov 28 06:48:10 crc kubenswrapper[4889]: Trace[1879485160]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:48:10.256)
Nov 28 06:48:10 crc kubenswrapper[4889]: Trace[1879485160]: [10.001871327s] [10.001871327s] END
Nov 28 06:48:10 crc kubenswrapper[4889]: E1128 06:48:10.256509 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Nov 28 06:48:10 crc kubenswrapper[4889]: E1128 06:48:10.274506 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Nov 28 06:48:10 crc kubenswrapper[4889]: W1128 06:48:10.419479 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.419585 4889 trace.go:236] Trace[1378676938]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:48:00.417) (total time: 10001ms):
Nov 28 06:48:10 crc kubenswrapper[4889]: Trace[1378676938]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:48:10.419)
Nov 28 06:48:10 crc kubenswrapper[4889]: Trace[1378676938]: [10.001661467s] [10.001661467s] END
Nov 28 06:48:10 crc kubenswrapper[4889]: E1128 06:48:10.419605 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.448654 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.450220 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.450300 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.450322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.456572 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.533586 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.535698 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.535807 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.535829 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.535880 4889 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 28 06:48:10 crc kubenswrapper[4889]: W1128 06:48:10.859021 4889 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.859251 4889 trace.go:236] Trace[1831010893]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:48:00.856) (total time: 10002ms):
Nov 28 06:48:10 crc kubenswrapper[4889]: Trace[1831010893]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (06:48:10.858)
Nov 28 06:48:10 crc kubenswrapper[4889]: Trace[1831010893]: [10.002363433s] [10.002363433s] END
Nov 28 06:48:10 crc kubenswrapper[4889]: E1128 06:48:10.859310 4889 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.993379 4889 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
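The failure mode in these records changes character as the apiserver comes up, and the progression is diagnostic: first "connect: connection refused" (nothing listening on 6443), then "net/http: TLS handshake timeout" (the socket accepts but the server is too overloaded to complete handshakes), then HTTP 403 on /livez (serving, but the anonymous probe user is forbidden until the rbac/bootstrap-roles poststarthook lands), then HTTP 500 (livez aggregate still failing), then success. A sketch that maps a log line onto those stages; the markers come straight from this journal and are not an exhaustive taxonomy:

```go
package main

import (
	"fmt"
	"strings"
)

// Classifies an error/probe line into the apiserver startup stage it
// implies. Real tooling would use typed errors, not substring checks.
func stage(line string) string {
	switch {
	case strings.Contains(line, "connect: connection refused"):
		return "apiserver not listening yet"
	case strings.Contains(line, "TLS handshake timeout"):
		return "listening, but handshakes not completing in time"
	case strings.Contains(line, "statuscode: 403"):
		return "serving, RBAC bootstrap roles not yet in place"
	case strings.Contains(line, "statuscode: 500"):
		return "serving, livez aggregate still failing"
	default:
		return "unclassified"
	}
}

func main() {
	for _, l := range []string{
		`dial tcp 38.102.83.98:6443: connect: connection refused`,
		`Post "https://api-int.crc.testing:6443/api/v1/nodes": net/http: TLS handshake timeout`,
		`output="HTTP probe failed with statuscode: 403"`,
		`output="HTTP probe failed with statuscode: 500"`,
	} {
		fmt.Printf("%-40.40s -> %s\n", l, stage(l))
	}
}
```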
output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.993451 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.999020 4889 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 06:48:10 crc kubenswrapper[4889]: I1128 06:48:10.999143 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 06:48:11 crc kubenswrapper[4889]: I1128 06:48:11.451233 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:11 crc kubenswrapper[4889]: I1128 06:48:11.453542 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:11 crc kubenswrapper[4889]: I1128 06:48:11.453597 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:11 crc kubenswrapper[4889]: I1128 06:48:11.453614 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.198155 4889 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]log ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]etcd ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/generic-apiserver-start-informers ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/priority-and-fairness-filter ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-apiextensions-informers ok Nov 28 06:48:12 crc kubenswrapper[4889]: 
[+]poststarthook/start-apiextensions-controllers ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/crd-informer-synced ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-system-namespaces-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 28 06:48:12 crc kubenswrapper[4889]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/bootstrap-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/start-kube-aggregator-informers ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/apiservice-registration-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/apiservice-discovery-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]autoregister-completion ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/apiservice-openapi-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 28 06:48:12 crc kubenswrapper[4889]: livez check failed Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.198244 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.413669 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.414083 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.415958 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.416027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.416040 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.450959 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
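The multi-line dump above is the apiserver's verbose livez output, relayed by the prober one journal record per check: "[+]name ok" for passing checks, "[-]name failed: reason withheld" for failing ones, ending in "livez check failed". Here the single failing check is poststarthook/rbac/bootstrap-roles, consistent with the earlier anonymous 403s. A short sketch to pull just the failing checks out of such output (journald prefixes assumed present and skipped over):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Prints only the failing checks from verbose healthz/livez output,
// i.e. the "[-]check-name failed: ..." lines.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if idx := strings.Index(line, "[-]"); idx >= 0 {
			// e.g. "poststarthook/rbac/bootstrap-roles failed: reason withheld"
			fmt.Println(line[idx+3:])
		}
	}
}
```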
pod="openshift-etcd/etcd-crc" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.454161 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.455548 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.455632 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.455652 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:12 crc kubenswrapper[4889]: I1128 06:48:12.476917 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 06:48:13 crc kubenswrapper[4889]: I1128 06:48:13.458861 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:13 crc kubenswrapper[4889]: I1128 06:48:13.461148 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:13 crc kubenswrapper[4889]: I1128 06:48:13.461238 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:13 crc kubenswrapper[4889]: I1128 06:48:13.461258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:13 crc kubenswrapper[4889]: I1128 06:48:13.581356 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 06:48:13 crc kubenswrapper[4889]: I1128 06:48:13.620298 4889 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 28 06:48:14 crc kubenswrapper[4889]: I1128 06:48:14.217674 4889 csr.go:261] certificate signing request csr-977rk is approved, waiting to be issued Nov 28 06:48:14 crc kubenswrapper[4889]: I1128 06:48:14.224190 4889 csr.go:257] certificate signing request csr-977rk is issued Nov 28 06:48:15 crc kubenswrapper[4889]: I1128 06:48:15.078370 4889 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 06:48:15 crc kubenswrapper[4889]: I1128 06:48:15.078475 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 06:48:15 crc kubenswrapper[4889]: I1128 06:48:15.225936 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-28 06:43:14 +0000 UTC, rotation deadline is 2026-09-02 13:29:34.238505549 +0000 UTC Nov 28 06:48:15 crc kubenswrapper[4889]: I1128 06:48:15.226014 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6678h41m19.012495944s for next certificate rotation Nov 28 06:48:15 crc 
Nov 28 06:48:15 crc kubenswrapper[4889]: I1128 06:48:15.688509 4889 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 28 06:48:15 crc kubenswrapper[4889]: I1128 06:48:15.984379 4889 trace.go:236] Trace[496471163]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:48:01.357) (total time: 14626ms):
Nov 28 06:48:15 crc kubenswrapper[4889]: Trace[496471163]: ---"Objects listed" error: 14626ms (06:48:15.984)
Nov 28 06:48:15 crc kubenswrapper[4889]: Trace[496471163]: [14.626488524s] [14.626488524s] END
Nov 28 06:48:15 crc kubenswrapper[4889]: I1128 06:48:15.984416 4889 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 28 06:48:15 crc kubenswrapper[4889]: I1128 06:48:15.984678 4889 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 28 06:48:15 crc kubenswrapper[4889]: E1128 06:48:15.985852 4889 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.001982 4889 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.056665 4889 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body=
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.056757 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.265770 4889 apiserver.go:52] "Watching apiserver"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.269521 4889 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.269972 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-8glkz","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
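The Trace lines are emitted by a helper that records steps inside a long operation and prints them only when the total exceeds a threshold; the 14.6s LIST of Services against a barely-started apiserver is what trips it here, and the same slowness is consistent with the "Unable to register node" retry just below. A stdlib-only mimic of the pattern, with invented names:

```go
// Mimic of a step-recording trace that logs only when an operation is slow,
// in the spirit of the Trace[...] lines above. All names are invented.
package main

import (
	"fmt"
	"time"
)

type step struct {
	msg string
	at  time.Time
}

type trace struct {
	name  string
	start time.Time
	steps []step
}

func (t *trace) Step(msg string) { t.steps = append(t.steps, step{msg, time.Now()}) }

// LogIfLong prints the recorded steps only when the total time exceeds the
// threshold, which is why such traces show up in the journal only for slow calls.
func (t *trace) LogIfLong(threshold time.Duration) {
	total := time.Since(t.start)
	if total < threshold {
		return
	}
	fmt.Printf("Trace[%s]: (total time: %dms):\n", t.name, total.Milliseconds())
	for _, s := range t.steps {
		fmt.Printf("Trace[%s]: ---%q %dms\n", t.name, s.msg, s.at.Sub(t.start).Milliseconds())
	}
}

func main() {
	t := &trace{name: "Reflector ListAndWatch", start: time.Now()}
	time.Sleep(50 * time.Millisecond) // stand-in for the slow LIST call
	t.Step("Objects listed")
	t.LogIfLong(10 * time.Millisecond)
}
```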
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.270567 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.270588 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.271050 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.271212 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.271305 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.271444 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.271566 4889 util.go:30] "No sandbox for pod can be found. 
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.271566 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8glkz"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.273365 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.273520 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.273691 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.274601 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.275139 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.275148 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.275172 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.275232 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.275290 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.275309 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.275411 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.275636 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.298604 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.315647 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.330891 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.340892 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.352216 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.363326 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.368634 4889 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.371845 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387719 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387765 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387784 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387799 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387816 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387832 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387850 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387867 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387886 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387906 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387926 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387971 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.387987 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388008 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388027 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388046 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388062 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388082 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388099 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388117 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388132 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388150 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388194 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388213 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388232 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388253 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388268 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388286 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388304 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388319 4889 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388335 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388353 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388370 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388393 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388412 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388436 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388454 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388473 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388490 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388507 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388523 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388542 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388569 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388585 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388602 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388619 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388638 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388655 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388671 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388690 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388720 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388738 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388754 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388775 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388792 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388809 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388831 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388849 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388865 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" 
(UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388882 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388901 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388919 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388936 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388955 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388974 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.388994 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389012 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389031 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389052 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389071 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389089 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389106 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389122 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389139 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389163 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389178 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389194 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389214 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389280 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389297 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389311 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389326 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389342 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389359 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389375 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389391 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389415 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389431 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389446 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389467 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389484 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389501 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389517 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389532 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389549 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389565 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.389581 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391281 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391318 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391346 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391367 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391391 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391415 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391442 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391467 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391496 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391522 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391541 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391673 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.391702 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392113 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392120 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392115 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392124 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392187 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392216 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392396 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392413 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392435 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392489 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392536 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392589 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392637 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392639 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392690 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392719 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392752 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392791 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392827 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392818 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392858 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392877 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392893 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392933 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.392977 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393012 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393045 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393082 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393113 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393147 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393183 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393213 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393215 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393248 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393291 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393325 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393439 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393484 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393517 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393554 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393596 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393641 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393686 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393957 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394013 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394045 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394089 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394439 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394501 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394550 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394585 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 
06:48:16.394627 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394672 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394959 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395009 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395054 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395102 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395142 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395265 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395431 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395487 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395527 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395583 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395664 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395727 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395862 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395911 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395966 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.396012 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.396049 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.396158 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.396278 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.396333 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.402852 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393222 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393426 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393481 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393614 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393656 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393794 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393838 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.393975 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394122 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394176 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394186 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394231 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394344 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394573 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394598 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394674 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394734 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.394947 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395157 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395269 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395303 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395312 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395516 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.395546 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.396136 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.397020 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.399625 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.400843 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.401064 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.401265 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.401461 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.401644 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.410375 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.410428 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.410743 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.410768 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411024 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411228 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411465 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411528 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411579 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411743 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411798 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411858 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.411981 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.412038 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.412130 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.412141 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.412162 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.412355 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.412992 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.413255 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.413419 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.413502 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.413777 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.414128 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.414445 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.414325 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.414505 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.414898 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.415174 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.416731 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.416640 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.417056 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.417349 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.417406 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.417647 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.417682 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.417724 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.418012 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.418198 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.418513 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.418750 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.418843 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.419059 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.419170 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.419434 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.419562 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.419672 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.420068 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.420263 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.420316 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.420316 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.420724 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.420774 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.420771 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.420942 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421137 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421153 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.423264 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421425 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421475 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421503 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421693 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421751 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421921 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421988 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.422978 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.421227 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.423555 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.423743 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.423772 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.423804 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.423861 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.423995 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426321 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426374 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426405 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426430 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426457 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426482 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426510 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426538 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426562 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426579 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426602 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426625 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426643 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426665 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426688 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426781 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426801 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426823 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426844 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426861 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426883 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426903 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426923 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.426998 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427026 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427051 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427074 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427099 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427123 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e310263-912f-4269-81da-423af72f5ffc-hosts-file\") pod \"node-resolver-8glkz\" (UID: 
\"3e310263-912f-4269-81da-423af72f5ffc\") " pod="openshift-dns/node-resolver-8glkz" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427147 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427168 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427195 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427215 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzcqg\" (UniqueName: \"kubernetes.io/projected/3e310263-912f-4269-81da-423af72f5ffc-kube-api-access-rzcqg\") pod \"node-resolver-8glkz\" (UID: \"3e310263-912f-4269-81da-423af72f5ffc\") " pod="openshift-dns/node-resolver-8glkz" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427242 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427270 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427296 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427318 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427340 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427360 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427484 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427497 4889 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427509 4889 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427524 4889 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427537 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427548 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427562 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427572 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427583 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427594 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427607 4889 reconciler_common.go:293] "Volume 
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427617 4889 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427628 4889 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427638 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427653 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427664 4889 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427675 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427689 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427713 4889 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427724 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427737 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427758 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427769 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427783 4889 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427794 4889 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427806 4889 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427816 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427825 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427836 4889 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427849 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427858 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427868 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427881 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427892 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427902 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427913 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427925 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427935 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427945 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427957 4889 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427971 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427981 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.427991 4889 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428001 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428013 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428023 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428032 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428044 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428053 4889 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428064 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428074 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428087 4889 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428098 4889 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428108 4889 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428118 4889 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428131 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428140 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428149 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428161 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428171 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428180 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428191 4889 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428204 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428214 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428224 4889 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428236 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428248 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428257 4889 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428267 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428276 4889 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428288 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428299 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428309 4889 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428320 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428330 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428340 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428351 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428362 4889 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428373 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428382 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428392 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428404 4889 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428414 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428423 4889 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428435 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428445 4889 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428455 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428464 4889 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428475 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428485 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428494 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428504 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428515 4889 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428525 4889 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428534 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428544 4889 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428556 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428565 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428574 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428586 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428596 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428606 4889 reconciler_common.go:293] "Volume
detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428615 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428626 4889 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428636 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428658 4889 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428681 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428693 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428716 4889 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428726 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428736 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428749 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428861 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428873 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc 
kubenswrapper[4889]: I1128 06:48:16.428886 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.428897 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.429935 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.430460 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.430626 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:16.930593851 +0000 UTC m=+19.900828006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.431055 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.431124 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.431307 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.431568 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.431970 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.432076 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.432595 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.432727 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.432750 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.433044 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.433107 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.433252 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.433283 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.433537 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.433758 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.433682 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.433931 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.434069 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.434104 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.434333 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.434472 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.434561 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.434935 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.434977 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.435337 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.435681 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.435745 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.436187 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.436534 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.436960 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.437219 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.437421 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.437624 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.437952 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.438352 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.438459 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.438622 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:16.938581094 +0000 UTC m=+19.908815249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.439052 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.439327 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.439325 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.439977 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.440188 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.440533 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.441028 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.441318 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.441551 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.441642 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:16.941617532 +0000 UTC m=+19.911851687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.441937 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.442083 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.442113 4889 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.442202 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.435990 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.442363 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.442466 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.442417 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.442518 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.442779 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.444010 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.453661 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.454016 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.454944 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.455461 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.456080 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.456507 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.456651 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.459694 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.459922 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.460171 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.460383 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.460939 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.461113 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.461155 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.461367 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.461581 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.461862 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.462078 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.463208 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.463875 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.463979 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.464070 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.464192 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:16.964170175 +0000 UTC m=+19.934404330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.463842 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.465258 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.467592 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.469817 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.472966 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.474030 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.474058 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.474075 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.474131 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:16.974112308 +0000 UTC m=+19.944346463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.474414 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.474423 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.476357 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502" exitCode=255 Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.476403 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502"} Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.478427 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.479646 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.482816 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.485969 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.490054 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.497107 4889 scope.go:117] "RemoveContainer" containerID="77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.504733 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.508738 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.509652 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.519001 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.520061 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530281 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e310263-912f-4269-81da-423af72f5ffc-hosts-file\") pod \"node-resolver-8glkz\" (UID: \"3e310263-912f-4269-81da-423af72f5ffc\") " pod="openshift-dns/node-resolver-8glkz" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530333 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530353 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzcqg\" (UniqueName: \"kubernetes.io/projected/3e310263-912f-4269-81da-423af72f5ffc-kube-api-access-rzcqg\") pod \"node-resolver-8glkz\" (UID: \"3e310263-912f-4269-81da-423af72f5ffc\") " pod="openshift-dns/node-resolver-8glkz" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530381 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530424 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530434 4889 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530444 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530452 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530459 4889 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530468 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530476 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530485 4889 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530496 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530504 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530514 4889 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530522 4889 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530530 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530537 4889 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530546 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530555 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530563 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530571 4889 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530580 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530588 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530597 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530607 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530615 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530623 4889 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530635 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530644 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530655 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530663 4889 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530672 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530682 4889 reconciler_common.go:293] "Volume 
detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530693 4889 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530717 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530729 4889 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530759 4889 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530768 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530777 4889 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530785 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530794 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530803 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530811 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530821 4889 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530832 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530844 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530854 4889 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530866 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530876 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530888 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530899 4889 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530907 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530916 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530925 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530933 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530944 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530953 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530963 4889 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530971 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530981 4889 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.530991 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531001 4889 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531012 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531023 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531035 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531044 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531054 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531063 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531073 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531086 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531097 4889 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531109 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath 
\"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531120 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531131 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531142 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531155 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531174 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531193 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531201 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531210 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531266 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531329 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e310263-912f-4269-81da-423af72f5ffc-hosts-file\") pod \"node-resolver-8glkz\" (UID: \"3e310263-912f-4269-81da-423af72f5ffc\") " pod="openshift-dns/node-resolver-8glkz" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.531412 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.535785 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.541606 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.549997 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.558144 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzcqg\" (UniqueName: \"kubernetes.io/projected/3e310263-912f-4269-81da-423af72f5ffc-kube-api-access-rzcqg\") pod \"node-resolver-8glkz\" (UID: \"3e310263-912f-4269-81da-423af72f5ffc\") " pod="openshift-dns/node-resolver-8glkz" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.571202 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.584831 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.585945 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.591677 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.600769 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.601326 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.604776 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8glkz" Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.632140 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:16 crc kubenswrapper[4889]: W1128 06:48:16.636812 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f4d71d5f811c5995295b8fa1b70e778e85a131ebccb5564bd8feecae4446a0c0 WatchSource:0}: Error finding container f4d71d5f811c5995295b8fa1b70e778e85a131ebccb5564bd8feecae4446a0c0: Status 404 returned error can't find the container with id f4d71d5f811c5995295b8fa1b70e778e85a131ebccb5564bd8feecae4446a0c0 Nov 28 06:48:16 crc kubenswrapper[4889]: W1128 06:48:16.642913 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7f7dbd23e11dc5cd064ef79ca5456e415ed26918cdf98fd22d8ef52acc5a7388 WatchSource:0}: Error finding container 7f7dbd23e11dc5cd064ef79ca5456e415ed26918cdf98fd22d8ef52acc5a7388: Status 404 returned error can't find the container with id 7f7dbd23e11dc5cd064ef79ca5456e415ed26918cdf98fd22d8ef52acc5a7388 Nov 28 06:48:16 crc kubenswrapper[4889]: W1128 06:48:16.644315 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e310263_912f_4269_81da_423af72f5ffc.slice/crio-1ac3ce9adb673707f17d21b4078b60bd63f8b8b961086327ee5604b9fccccb54 WatchSource:0}: Error finding container 1ac3ce9adb673707f17d21b4078b60bd63f8b8b961086327ee5604b9fccccb54: Status 404 returned error can't find the container with id 1ac3ce9adb673707f17d21b4078b60bd63f8b8b961086327ee5604b9fccccb54 Nov 28 06:48:16 crc kubenswrapper[4889]: I1128 06:48:16.934769 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:16 crc kubenswrapper[4889]: E1128 06:48:16.935066 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:17.935036905 +0000 UTC m=+20.905271060 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.036283 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.036325 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.036345 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.036396 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.036617 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.036642 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.036675 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.036745 4889 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:18.036719352 +0000 UTC m=+21.006953517 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.036756 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.036768 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:18.036758453 +0000 UTC m=+21.006992628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.036777 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.036861 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:18.036835795 +0000 UTC m=+21.007070130 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.037296 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.037319 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.037331 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.037385 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:18.037366668 +0000 UTC m=+21.007600823 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.159086 4889 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.185759 4889 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186022 4889 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186061 4889 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186087 4889 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186116 4889 
reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186115 4889 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186066 4889 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186159 4889 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186191 4889 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186193 4889 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186249 4889 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186278 4889 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186283 4889 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.186331 4889 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.196755 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.215641 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.240756 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.265492 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.277815 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.291272 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.303388 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.313623 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.329150 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.331337 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.331507 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.335253 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.336371 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.338521 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.339561 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.341020 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.341730 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.342538 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.344140 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.345236 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.346272 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.346698 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.347551 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.349803 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.350670 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.351476 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.352529 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.353052 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.354045 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.354450 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.355105 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.356308 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.356764 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.357715 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.358147 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.359282 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.359801 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.360411 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.361614 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.362167 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.363411 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.363551 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.363943 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.364831 4889 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.364939 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.366524 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.372972 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.373753 
4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.375684 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.376761 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.377765 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.378367 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.379427 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.379895 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.381094 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.381752 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.381932 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.382805 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.383290 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.384843 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.385438 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.386513 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.387185 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.387805 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.388421 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.389049 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.389764 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.390350 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.393343 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.402514 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kwbr9"] Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.403223 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.404071 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.405337 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.405542 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.405542 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.406172 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.411186 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.423645 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.454651 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.474232 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.481199 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.483374 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.483770 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.484748 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7f7dbd23e11dc5cd064ef79ca5456e415ed26918cdf98fd22d8ef52acc5a7388"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.486008 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8glkz" event={"ID":"3e310263-912f-4269-81da-423af72f5ffc","Type":"ContainerStarted","Data":"84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.486072 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8glkz" event={"ID":"3e310263-912f-4269-81da-423af72f5ffc","Type":"ContainerStarted","Data":"1ac3ce9adb673707f17d21b4078b60bd63f8b8b961086327ee5604b9fccccb54"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.488868 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.489222 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.489247 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.489259 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f4d71d5f811c5995295b8fa1b70e778e85a131ebccb5564bd8feecae4446a0c0"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.497062 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.497157 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"56fb7d5a606aa8dfc5fe83f08cbf5e202f97f37fea9d9b775f0ce6a82829d4fa"} Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.498152 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.513467 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.529965 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.544275 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a6707da-48a9-4e38-a1b2-df82148f0cd2-proxy-tls\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.544333 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a6707da-48a9-4e38-a1b2-df82148f0cd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.544376 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a6707da-48a9-4e38-a1b2-df82148f0cd2-rootfs\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.544408 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btx88\" (UniqueName: \"kubernetes.io/projected/6a6707da-48a9-4e38-a1b2-df82148f0cd2-kube-api-access-btx88\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.548973 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.585351 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.598596 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.612762 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.627404 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.642694 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.644874 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a6707da-48a9-4e38-a1b2-df82148f0cd2-rootfs\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.644913 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btx88\" (UniqueName: \"kubernetes.io/projected/6a6707da-48a9-4e38-a1b2-df82148f0cd2-kube-api-access-btx88\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.644958 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a6707da-48a9-4e38-a1b2-df82148f0cd2-proxy-tls\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.645001 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a6707da-48a9-4e38-a1b2-df82148f0cd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.645244 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a6707da-48a9-4e38-a1b2-df82148f0cd2-rootfs\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " 
pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.645788 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a6707da-48a9-4e38-a1b2-df82148f0cd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.653221 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a6707da-48a9-4e38-a1b2-df82148f0cd2-proxy-tls\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.662149 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.674079 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btx88\" (UniqueName: \"kubernetes.io/projected/6a6707da-48a9-4e38-a1b2-df82148f0cd2-kube-api-access-btx88\") pod \"machine-config-daemon-kwbr9\" (UID: \"6a6707da-48a9-4e38-a1b2-df82148f0cd2\") " pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.676897 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.714365 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.716431 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:48:17 crc kubenswrapper[4889]: W1128 06:48:17.735450 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a6707da_48a9_4e38_a1b2_df82148f0cd2.slice/crio-f4ed77c791a9468adb2dafbcc683d84c41fa0164d81a3c06729fa95c8dfa09d4 WatchSource:0}: Error finding container f4ed77c791a9468adb2dafbcc683d84c41fa0164d81a3c06729fa95c8dfa09d4: Status 404 returned error can't find the container with id f4ed77c791a9468adb2dafbcc683d84c41fa0164d81a3c06729fa95c8dfa09d4 Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.767305 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.821928 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.864916 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vtjm7"] Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.865415 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.870413 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.870774 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.871229 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.871468 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.873214 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-m98zh"] Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.874994 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.875921 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.877888 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.877991 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.881669 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2l6bn"] Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.886678 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.891447 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.892162 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.892321 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.892593 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.893133 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.893380 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.893612 4889 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.896235 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.931030 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.944437 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948328 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948436 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4901957d-ef15-4af5-a61b-b3d632c871d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948465 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-system-cni-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948483 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-cnibin\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948500 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-cni-multus\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " 
pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948518 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-system-cni-dir\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948536 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4fxg\" (UniqueName: \"kubernetes.io/projected/4901957d-ef15-4af5-a61b-b3d632c871d4-kube-api-access-t4fxg\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948576 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-cni-bin\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948592 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-cnibin\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948608 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-conf-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948624 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-etc-kubernetes\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948641 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-cni-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948656 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-hostroot\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948673 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-multus-certs\") pod \"multus-vtjm7\" (UID: 
\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948693 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68ddfdcf-000e-45ae-a737-d3dd28115d5b-cni-binary-copy\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948732 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-daemon-config\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948748 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x69mv\" (UniqueName: \"kubernetes.io/projected/68ddfdcf-000e-45ae-a737-d3dd28115d5b-kube-api-access-x69mv\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948772 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-kubelet\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948796 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948813 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-os-release\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948831 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-socket-dir-parent\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948851 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-k8s-cni-cncf-io\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948874 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4901957d-ef15-4af5-a61b-b3d632c871d4-cni-sysctl-allowlist\") 
pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948903 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-netns\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:17 crc kubenswrapper[4889]: E1128 06:48:17.948930 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:19.948906729 +0000 UTC m=+22.919140884 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.948974 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-os-release\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.960260 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:17 crc kubenswrapper[4889]: I1128 06:48:17.983330 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.002935 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.021927 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.037335 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.043961 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050373 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-netns\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050419 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-log-socket\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050446 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-os-release\") pod 
\"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050467 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-bin\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050488 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-config\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050514 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-etc-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050539 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050561 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4901957d-ef15-4af5-a61b-b3d632c871d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050584 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-var-lib-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050617 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovn-node-metrics-cert\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050719 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-system-cni-dir\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050892 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t4fxg\" (UniqueName: \"kubernetes.io/projected/4901957d-ef15-4af5-a61b-b3d632c871d4-kube-api-access-t4fxg\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.050924 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-system-cni-dir\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051039 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-os-release\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051037 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-system-cni-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051082 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-cnibin\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051090 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-system-cni-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051109 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-cni-multus\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051139 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-cnibin\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051139 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-ovn-kubernetes\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051175 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-cni-multus\") 
pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051195 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051230 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-node-log\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051249 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-netd\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051277 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.051298 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051313 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-env-overrides\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.051315 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051368 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-cni-bin\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.051373 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.051382 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051394 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-cnibin\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051428 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-cnibin\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.051433 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:20.051410647 +0000 UTC m=+23.021644992 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051454 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051478 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-conf-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.051490 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:20.051475979 +0000 UTC m=+23.021710134 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051508 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-conf-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051459 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-cni-bin\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051514 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-etc-kubernetes\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051548 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-script-lib\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051571 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-cni-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051591 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-hostroot\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051620 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-etc-kubernetes\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051651 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-multus-certs\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051658 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-hostroot\") pod \"multus-vtjm7\" (UID: 
\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051621 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-multus-certs\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051721 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxwh\" (UniqueName: \"kubernetes.io/projected/6de1d273-3dcf-4772-bc88-323f46e1ead5-kube-api-access-tvxwh\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051743 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68ddfdcf-000e-45ae-a737-d3dd28115d5b-cni-binary-copy\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051743 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4901957d-ef15-4af5-a61b-b3d632c871d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051806 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-cni-dir\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051762 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-daemon-config\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051851 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x69mv\" (UniqueName: \"kubernetes.io/projected/68ddfdcf-000e-45ae-a737-d3dd28115d5b-kube-api-access-x69mv\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051871 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-kubelet\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051893 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-kubelet\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc 
kubenswrapper[4889]: I1128 06:48:18.051914 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051945 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051973 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-socket-dir-parent\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.051992 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-k8s-cni-cncf-io\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052017 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-os-release\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052035 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4901957d-ef15-4af5-a61b-b3d632c871d4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052055 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-systemd-units\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052073 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-ovn\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052096 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") 
" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052116 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-netns\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052133 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-slash\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052157 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-systemd\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052462 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-daemon-config\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052529 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-os-release\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052546 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68ddfdcf-000e-45ae-a737-d3dd28115d5b-cni-binary-copy\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052565 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-var-lib-kubelet\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052670 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-netns\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.052695 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.052741 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 
06:48:18.052757 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.052785 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052804 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4901957d-ef15-4af5-a61b-b3d632c871d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.052820 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:20.052798332 +0000 UTC m=+23.023032687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.052848 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:20.052837593 +0000 UTC m=+23.023071968 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052874 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-host-run-k8s-cni-cncf-io\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.052957 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/68ddfdcf-000e-45ae-a737-d3dd28115d5b-multus-socket-dir-parent\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.053133 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4901957d-ef15-4af5-a61b-b3d632c871d4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.061344 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.072927 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4fxg\" (UniqueName: \"kubernetes.io/projected/4901957d-ef15-4af5-a61b-b3d632c871d4-kube-api-access-t4fxg\") pod \"multus-additional-cni-plugins-m98zh\" (UID: \"4901957d-ef15-4af5-a61b-b3d632c871d4\") " 
pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.073408 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x69mv\" (UniqueName: \"kubernetes.io/projected/68ddfdcf-000e-45ae-a737-d3dd28115d5b-kube-api-access-x69mv\") pod \"multus-vtjm7\" (UID: \"68ddfdcf-000e-45ae-a737-d3dd28115d5b\") " pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.079865 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.095959 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.109980 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.131669 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.147724 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153051 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-systemd-units\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153169 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-ovn\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153133 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-systemd-units\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153245 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-slash\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153260 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-ovn\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153266 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-systemd\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153308 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-netns\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153327 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-log-socket\") pod \"ovnkube-node-2l6bn\" (UID: 
\"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153330 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-systemd\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153333 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-slash\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153374 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-log-socket\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153355 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-netns\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153347 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-bin\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153376 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-bin\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153414 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-config\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153434 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-etc-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153457 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153481 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-var-lib-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153500 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovn-node-metrics-cert\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153518 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-ovn-kubernetes\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153529 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153542 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-node-log\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153556 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-etc-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153557 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-netd\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153584 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-env-overrides\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153601 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153619 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-script-lib\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153636 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-kubelet\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153653 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxwh\" (UniqueName: \"kubernetes.io/projected/6de1d273-3dcf-4772-bc88-323f46e1ead5-kube-api-access-tvxwh\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153905 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-ovn-kubernetes\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153935 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-node-log\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.153957 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-netd\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.154093 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-var-lib-openvswitch\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.154304 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-config\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.154364 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.154378 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-env-overrides\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.154401 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-kubelet\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.154686 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-script-lib\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.156534 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovn-node-metrics-cert\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.166052 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.168106 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.171085 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxwh\" (UniqueName: \"kubernetes.io/projected/6de1d273-3dcf-4772-bc88-323f46e1ead5-kube-api-access-tvxwh\") pod \"ovnkube-node-2l6bn\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") " pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.179338 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vtjm7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.189642 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.196983 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m98zh" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.204163 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:18 crc kubenswrapper[4889]: W1128 06:48:18.216302 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4901957d_ef15_4af5_a61b_b3d632c871d4.slice/crio-4fab0078f938067b90302aa887c110be1b08df6451a80d7795476986c342a2b4 WatchSource:0}: Error finding container 4fab0078f938067b90302aa887c110be1b08df6451a80d7795476986c342a2b4: Status 404 returned error can't find the container with id 4fab0078f938067b90302aa887c110be1b08df6451a80d7795476986c342a2b4 Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.269823 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.307263 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.331780 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.332001 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.332141 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:18 crc kubenswrapper[4889]: E1128 06:48:18.332223 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.352195 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.381201 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.439112 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.502993 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"a9829ea12def74cb959107004588798ce745c8954d3cd73c1eea2d9f52f78eab"} Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.506912 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" event={"ID":"4901957d-ef15-4af5-a61b-b3d632c871d4","Type":"ContainerStarted","Data":"4fab0078f938067b90302aa887c110be1b08df6451a80d7795476986c342a2b4"} Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.508356 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vtjm7" event={"ID":"68ddfdcf-000e-45ae-a737-d3dd28115d5b","Type":"ContainerStarted","Data":"1b9e0e47e302353fd45b9a096c38ab07c2689c801b9f247f2ac252d1b69b1e04"} Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.513755 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4"} Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.513823 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f"} Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.513841 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"f4ed77c791a9468adb2dafbcc683d84c41fa0164d81a3c06729fa95c8dfa09d4"} Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.533872 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\
\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.553858 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.574894 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.579272 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.588957 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.593342 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.608456 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.611926 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.622535 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.635358 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.648340 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.648661 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.662013 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.667816 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.674032 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.694896 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:18 crc kubenswrapper[4889]: I1128 06:48:18.718633 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.186110 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.189576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.189652 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.189667 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.189772 4889 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.197781 4889 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.198122 4889 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.199422 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.199453 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.199465 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.199487 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.199503 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: E1128 06:48:19.218270 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.221559 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.221584 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.221593 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.221608 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.221623 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: E1128 06:48:19.233441 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.237044 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.237070 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.237078 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.237093 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.237104 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: E1128 06:48:19.250267 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.253585 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.253614 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.253624 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.253639 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.253650 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: E1128 06:48:19.266985 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.271060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.271079 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.271088 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.271110 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.271120 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: E1128 06:48:19.286785 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: E1128 06:48:19.286932 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.289579 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.289635 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.289652 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.289679 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.289703 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.331604 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:19 crc kubenswrapper[4889]: E1128 06:48:19.331952 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.394040 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.394117 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.394135 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.394163 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.394186 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.496950 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.496986 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.497024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.497041 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.497052 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.520984 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa" exitCode=0 Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.521081 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.524495 4889 generic.go:334] "Generic (PLEG): container finished" podID="4901957d-ef15-4af5-a61b-b3d632c871d4" containerID="f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d" exitCode=0 Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.525074 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" event={"ID":"4901957d-ef15-4af5-a61b-b3d632c871d4","Type":"ContainerDied","Data":"f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.530721 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vtjm7" event={"ID":"68ddfdcf-000e-45ae-a737-d3dd28115d5b","Type":"ContainerStarted","Data":"c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.535617 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.547605 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.568806 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.595565 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.601088 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.601159 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.601173 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.601194 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.601206 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.616217 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.631318 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.655910 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.671227 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.690420 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.704293 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.704346 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.704359 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.704380 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.704394 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.706982 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.725402 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.742403 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.754611 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.774519 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.776417 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-48xq6"] Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.776832 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.778833 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.779607 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.780097 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.780365 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.789055 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.803363 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.807884 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.807914 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.807922 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.807937 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.807947 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.823190 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.840559 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.861454 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.872336 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/473fe0ca-e884-4f0a-8c28-4994f487ca5c-serviceca\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.872379 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhr52\" (UniqueName: \"kubernetes.io/projected/473fe0ca-e884-4f0a-8c28-4994f487ca5c-kube-api-access-vhr52\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.872404 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/473fe0ca-e884-4f0a-8c28-4994f487ca5c-host\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.881145 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.895046 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.911455 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.911500 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.911512 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.911531 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.911544 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:19Z","lastTransitionTime":"2025-11-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.911610 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.926827 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.948337 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.964453 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc 
kubenswrapper[4889]: I1128 06:48:19.973748 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.973928 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/473fe0ca-e884-4f0a-8c28-4994f487ca5c-serviceca\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.973953 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhr52\" (UniqueName: \"kubernetes.io/projected/473fe0ca-e884-4f0a-8c28-4994f487ca5c-kube-api-access-vhr52\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.973976 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/473fe0ca-e884-4f0a-8c28-4994f487ca5c-host\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.974048 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/473fe0ca-e884-4f0a-8c28-4994f487ca5c-host\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: E1128 06:48:19.974154 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:23.974129883 +0000 UTC m=+26.944364038 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.975000 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/473fe0ca-e884-4f0a-8c28-4994f487ca5c-serviceca\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.983454 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:19Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:19 crc kubenswrapper[4889]: I1128 06:48:19.994012 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhr52\" (UniqueName: \"kubernetes.io/projected/473fe0ca-e884-4f0a-8c28-4994f487ca5c-kube-api-access-vhr52\") pod \"node-ca-48xq6\" (UID: \"473fe0ca-e884-4f0a-8c28-4994f487ca5c\") " pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.005358 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.016628 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.016676 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.016687 4889 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.016733 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.016745 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.021456 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.037994 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.055693 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.072578 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.074966 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.075018 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.075051 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.075070 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075111 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075158 4889 configmap.go:193] 
Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075182 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:24.075161113 +0000 UTC m=+27.045395258 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075201 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:24.075194294 +0000 UTC m=+27.045428449 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075253 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075262 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075265 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075274 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075278 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075284 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075311 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:24.075299927 +0000 UTC m=+27.045534082 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.075324 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:24.075318967 +0000 UTC m=+27.045553122 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.083314 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.095791 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.106459 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.119012 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.119038 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.119047 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.119062 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 
06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.119072 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.121390 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-48xq6" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.124567 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: W1128 06:48:20.137390 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod473fe0ca_e884_4f0a_8c28_4994f487ca5c.slice/crio-2c869df7a1e9cf411c1b40054167f149e75294cf63bd17bc88c22516d9826194 WatchSource:0}: Error finding container 2c869df7a1e9cf411c1b40054167f149e75294cf63bd17bc88c22516d9826194: Status 404 returned error can't find the container with id 2c869df7a1e9cf411c1b40054167f149e75294cf63bd17bc88c22516d9826194 Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.141148 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.156579 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.177505 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.227099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.227493 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.227506 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.227527 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.227541 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.331616 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.331819 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.331627 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.331882 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.331916 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.331930 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.331998 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.332017 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: E1128 06:48:20.332093 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.434666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.434735 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.434745 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.434766 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.434778 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.538508 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.538551 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.538560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.538897 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.538920 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.543412 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.543673 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.543792 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.543852 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.543906 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.543966 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.547418 4889 generic.go:334] "Generic (PLEG): container finished" podID="4901957d-ef15-4af5-a61b-b3d632c871d4" containerID="c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a" exitCode=0 Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.547464 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" event={"ID":"4901957d-ef15-4af5-a61b-b3d632c871d4","Type":"ContainerDied","Data":"c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.548879 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-48xq6" event={"ID":"473fe0ca-e884-4f0a-8c28-4994f487ca5c","Type":"ContainerStarted","Data":"e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.548969 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-48xq6" event={"ID":"473fe0ca-e884-4f0a-8c28-4994f487ca5c","Type":"ContainerStarted","Data":"2c869df7a1e9cf411c1b40054167f149e75294cf63bd17bc88c22516d9826194"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.572786 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.590378 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.601513 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.611631 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.626939 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.642531 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.642589 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.642603 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.642631 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.642646 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.644602 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.658122 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.673932 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.690022 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.706893 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.723173 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.737522 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.746609 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.746658 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.746669 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.746687 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.746700 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.749624 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.766086 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rel
ease\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.780995 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.793544 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.808434 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.824509 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.837879 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.849330 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.849379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.849392 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.849418 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.849436 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.852805 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.865943 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.896323 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.939389 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.952005 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.952079 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.952099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.952132 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.952153 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:20Z","lastTransitionTime":"2025-11-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:20 crc kubenswrapper[4889]: I1128 06:48:20.978250 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:20Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.022535 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.055020 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.055485 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.055660 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.055875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.056049 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.078215 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.159831 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.159881 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.159895 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.159921 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.159937 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.263635 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.263698 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.263728 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.263750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.263762 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.331551 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:21 crc kubenswrapper[4889]: E1128 06:48:21.331852 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.366833 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.366898 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.366915 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.366940 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.366958 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.469780 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.469818 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.469829 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.469845 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.469855 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.564104 4889 generic.go:334] "Generic (PLEG): container finished" podID="4901957d-ef15-4af5-a61b-b3d632c871d4" containerID="20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef" exitCode=0 Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.564169 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" event={"ID":"4901957d-ef15-4af5-a61b-b3d632c871d4","Type":"ContainerDied","Data":"20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.573379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.573581 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.573740 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.573886 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.574014 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.594654 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.620045 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.632989 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.650430 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.663331 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.678226 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.678355 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.678400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.678417 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.678444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.678461 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.695554 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.717309 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.731920 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.745299 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.758766 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.777818 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.781275 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.781315 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.781326 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.781343 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.781352 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.796478 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.884975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.885041 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.885056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.885082 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.885118 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.987911 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.988197 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.988279 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.988382 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:21 crc kubenswrapper[4889]: I1128 06:48:21.988472 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:21Z","lastTransitionTime":"2025-11-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.084895 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.091843 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.092954 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.092985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.092998 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.093016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.093032 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.100807 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.112790 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.138918 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.155878 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.174855 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.189824 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.195214 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.195412 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.195587 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.195771 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.195940 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.204766 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.220469 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.240090 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.258562 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.276537 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.292666 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.298712 4889 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.298748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.298757 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.298774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.298785 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.314941 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.331004 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:22 crc kubenswrapper[4889]: E1128 06:48:22.331149 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.331004 4889 util.go:30] "No sandbox for pod can be found. 
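Every "Failed to update status for pod" entry above shares a single root cause: the apiserver's call to the pod.network-node-identity.openshift.io webhook fails its TLS handshake because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, roughly three months before the node clock reading of 2025-11-28T06:48:22Z. Below is a minimal Go sketch of the validity-window check that produces this exact x509 error; the certificate path is an assumption based on the /etc/webhook-cert/ mount shown for the network-node-identity-vrzqb pod later in this log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed path: the webhook container mounts its serving cert under
	// /etc/webhook-cert/ per the volumeMounts in this log.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	now := time.Now().UTC()
	// The same window check that yields "x509: certificate has expired
	// or is not yet valid" during the TLS handshake.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid at %s (valid %s to %s)\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
	} else {
		fmt.Println("certificate is within its validity window")
	}
}
```

Whether the fix is renewing the certificate or correcting a skewed node clock depends on which side is actually stale; the sketch only shows which comparison is failing.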
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:22 crc kubenswrapper[4889]: E1128 06:48:22.331473 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.333408 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.345844 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.364812 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.381126 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
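The lastState blocks above report exitCode 137 with reason ContainerStatusUnknown and null start/finish timestamps: kubelet could not locate the container after the pod was deleted, so it synthesizes a terminal status instead of reading a real one. The 137 follows the usual 128+signal convention, illustrated by this small Go sketch; the convention is general Unix practice, not something stated in this log.

```go
package main

import "fmt"

func main() {
	const sigkill = 9 // SIGKILL
	// Shells and kubelet report "terminated by signal N" as exit code
	// 128+N, so 137 corresponds to SIGKILL. For ContainerStatusUnknown,
	// kubelet fills this value in synthetically because the container
	// vanished before its real exit status could be read.
	fmt.Println(128 + sigkill) // prints 137
}
```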
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.408518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.408535 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.408568 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.408779 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.408809 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.408823 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
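The NodeNotReady condition repeated above is a separate problem from the webhook failures: kubelet keeps the container runtime network NotReady until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A rough Go sketch of such a readiness probe follows; the directory comes verbatim from the NetworkPluginNotReady message, while the .conf/.conflist/.json extensions are an assumption about what the CNI loader conventionally accepts.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken verbatim from the log's NetworkPluginNotReady message.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		// Extensions assumed from libcni's usual defaults, not from this log.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir, "- runtime network stays NotReady")
	}
}
```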
Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.427407 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.443536 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.456841 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.471427 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.489230 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 
2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.511662 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.511722 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.511736 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.511755 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.511767 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.518505 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.562422 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.570881 4889 generic.go:334] "Generic (PLEG): container finished" podID="4901957d-ef15-4af5-a61b-b3d632c871d4" containerID="ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43" exitCode=0 Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.570921 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" event={"ID":"4901957d-ef15-4af5-a61b-b3d632c871d4","Type":"ContainerDied","Data":"ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.576741 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.599020 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.615942 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.615981 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.615993 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.616010 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.616022 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.643995 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.682892 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.721598 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.723132 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.723208 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.723229 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.723259 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.723283 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.758829 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.801023 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.829477 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.829514 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.829524 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.829540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.829552 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.839974 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.875898 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.917581 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.931394 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.931440 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.931450 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.931467 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.931483 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:22Z","lastTransitionTime":"2025-11-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.959045 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:22 crc kubenswrapper[4889]: I1128 06:48:22.997201 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:22Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.048959 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.048997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.049007 4889 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.049023 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.049034 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.060354 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.077170 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.116324 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.164290 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.164351 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.164369 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.164397 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.164414 4889 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.167668 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.201369 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.235101 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.267536 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.267585 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.267595 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.267614 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.267626 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.331798 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:23 crc kubenswrapper[4889]: E1128 06:48:23.331953 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.370501 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.370568 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.370587 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.370614 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.370634 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.473258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.473292 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.473304 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.473323 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.473341 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.578465 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.578538 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.578561 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.578592 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.578616 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.583050 4889 generic.go:334] "Generic (PLEG): container finished" podID="4901957d-ef15-4af5-a61b-b3d632c871d4" containerID="29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4" exitCode=0 Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.583100 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" event={"ID":"4901957d-ef15-4af5-a61b-b3d632c871d4","Type":"ContainerDied","Data":"29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.606606 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.619756 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.635490 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.651982 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.668284 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.682335 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.682376 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.682389 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.682408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.682422 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.682820 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.694450 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.707289 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.724322 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.740394 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.757656 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.769337 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.784089 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0
f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.784225 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.784253 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.784263 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.784280 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.784293 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.819046 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e
1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:23Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.886877 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.886923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.886934 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.886953 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.886965 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.989011 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.989060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.989072 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.989094 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:23 crc kubenswrapper[4889]: I1128 06:48:23.989110 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:23Z","lastTransitionTime":"2025-11-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.024758 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.025025 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:32.025005162 +0000 UTC m=+34.995239307 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.091841 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.091878 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.091887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.091903 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.091913 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.126150 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.126218 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.126271 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.126321 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126398 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126484 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126506 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126540 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126403 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126616 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126645 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126521 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:32.126489194 +0000 UTC m=+35.096723369 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126559 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126720 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:32.126679999 +0000 UTC m=+35.096914154 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126740 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:32.12673134 +0000 UTC m=+35.096965495 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.126786 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:32.126766901 +0000 UTC m=+35.097001076 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.195908 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.195998 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.196013 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.196039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.196052 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.299415 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.299510 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.299555 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.299603 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.299629 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.330939 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.331035 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.331120 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:24 crc kubenswrapper[4889]: E1128 06:48:24.331266 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.403748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.403826 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.403849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.403884 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.403907 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.506526 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.506618 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.506642 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.506673 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.506693 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.590613 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" event={"ID":"4901957d-ef15-4af5-a61b-b3d632c871d4","Type":"ContainerDied","Data":"8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.591322 4889 generic.go:334] "Generic (PLEG): container finished" podID="4901957d-ef15-4af5-a61b-b3d632c871d4" containerID="8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95" exitCode=0 Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.611807 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.611867 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.611880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.611901 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.611915 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.613505 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.630899 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.648325 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.665764 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.680005 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.693150 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.710975 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.715748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.715817 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.715832 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.715851 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.715862 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.727145 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.744480 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.756586 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.769473 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0
f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.787109 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.800915 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809
d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\
\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.812425 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:24Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.818668 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.818716 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.818732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.818750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.818764 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.922258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.922321 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.922347 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.922383 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:24 crc kubenswrapper[4889]: I1128 06:48:24.922413 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:24Z","lastTransitionTime":"2025-11-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.025721 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.025775 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.025790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.025809 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.025823 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.128899 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.128956 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.128971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.129027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.129048 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.232764 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.232828 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.232846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.232868 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.232885 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.331433 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:25 crc kubenswrapper[4889]: E1128 06:48:25.331673 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.336848 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.336907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.336919 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.336940 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.336952 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.439873 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.439918 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.439927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.439945 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.439955 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.542833 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.542906 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.542924 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.542953 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.542971 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.601974 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" event={"ID":"4901957d-ef15-4af5-a61b-b3d632c871d4","Type":"ContainerStarted","Data":"b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.608592 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.609082 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.626899 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.644318 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.646203 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.646290 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.646303 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.646331 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.646347 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.648200 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.668873 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.685166 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.704290 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.721469 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.733888 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.749543 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.749669 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.749797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.749905 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.749987 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.754313 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.771360 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.787650 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.801809 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.820186 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.837730 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.852074 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.852116 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.852129 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.852144 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.852155 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.859930 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e
1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.873810 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.889958 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.906052 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.923014 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.941782 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.955234 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.955281 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.955293 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.955310 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.955323 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:25Z","lastTransitionTime":"2025-11-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.960000 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.978364 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:25 crc kubenswrapper[4889]: I1128 06:48:25.997902 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.015767 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.033739 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.053257 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0
f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.058466 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.058501 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.058511 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.058529 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.058542 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.078447 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.096593 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.111242 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.161588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.161641 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.161653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.161674 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.161691 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.264955 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.265025 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.265035 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.265056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.265067 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.331075 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.331089 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:26 crc kubenswrapper[4889]: E1128 06:48:26.331330 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:26 crc kubenswrapper[4889]: E1128 06:48:26.331411 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.368048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.368103 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.368117 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.368139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.368153 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.470625 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.470700 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.470739 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.470763 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.470780 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.572849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.572896 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.572905 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.572922 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.572933 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.615080 4889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.616003 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.676256 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.676297 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.676306 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.676321 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.676330 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.690845 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.706737 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.724273 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.739172 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.758131 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.772836 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.781374 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.781425 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.781436 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.781455 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.781466 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.786429 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.801915 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.820015 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.835247 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.848496 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.862876 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0
f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.884297 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.884420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.884501 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.884609 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.884697 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.884919 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.909719 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.925228 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.987558 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.987594 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.987605 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.987620 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:26 crc kubenswrapper[4889]: I1128 06:48:26.987631 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:26Z","lastTransitionTime":"2025-11-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.089512 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.089553 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.089562 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.089578 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.089592 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.191475 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.191513 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.191521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.191541 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.191552 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.293935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.293975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.293989 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.294030 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.294043 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.331307 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:27 crc kubenswrapper[4889]: E1128 06:48:27.332159 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.346631 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.364189 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.381468 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.395662 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.397037 4889 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.397167 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.397223 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.397310 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.397365 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.411001 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.440541 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.460903 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.477080 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.497511 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.501308 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.501377 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.501404 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.501434 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.501454 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.517795 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.537294 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.555768 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.570630 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.589805 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.605441 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.605496 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.605510 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.605532 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.605546 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.622406 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/0.log" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.626639 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1" exitCode=1 Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.626770 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.627456 4889 scope.go:117] "RemoveContainer" containerID="f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.643456 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.671163 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.709306 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.709729 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.709824 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.709918 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.709997 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.725676 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.747436 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.766461 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.779069 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.800313 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.817130 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.817208 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.817226 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.817255 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.817274 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.820350 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.835085 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.853258 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.867655 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0
f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.890210 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c919fa64b1e74e7f98c90390f3b91b87e6d213
69963b41a4a539d272014dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:27Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:48:27.159472 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:48:27.159570 6156 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 06:48:27.159593 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 06:48:27.159598 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 06:48:27.159624 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:48:27.159640 6156 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:48:27.159645 6156 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:48:27.159659 6156 factory.go:656] Stopping watch factory\\\\nI1128 06:48:27.159676 6156 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:48:27.159720 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 06:48:27.159725 6156 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:48:27.159735 6156 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1128 06:48:27.159737 6156 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 06:48:27.159744 6156 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 06:48:27.159752 6156 handler.go:208] Removed *v1.Namespace event 
handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.907759 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.920928 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.921956 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.922047 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.922073 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.922108 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:27 crc kubenswrapper[4889]: I1128 06:48:27.922136 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:27Z","lastTransitionTime":"2025-11-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.025913 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.025969 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.025982 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.026006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.026019 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.129279 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.129330 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.129343 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.129364 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.129378 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.231641 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.231693 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.231712 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.231729 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.231748 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.331012 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.331040 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:28 crc kubenswrapper[4889]: E1128 06:48:28.331159 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:28 crc kubenswrapper[4889]: E1128 06:48:28.331320 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.333535 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.333565 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.333577 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.333589 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.333600 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.435636 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.435682 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.435696 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.435731 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.435743 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.537808 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.537853 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.537865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.537885 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.537897 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.632795 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/0.log" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.635895 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.636018 4889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.639951 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.639979 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.639988 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.640006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.640017 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.658454 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.673660 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.686320 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.698580 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.710752 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.725747 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.739008 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.743323 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.743392 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.743405 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.743424 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.743436 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.751505 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.768824 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.782698 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.794700 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.805850 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.819674 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0
f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.839070 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf4
3dd073b13ac63907fd7303ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:27Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:48:27.159472 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:48:27.159570 6156 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 06:48:27.159593 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 06:48:27.159598 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 06:48:27.159624 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:48:27.159640 6156 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:48:27.159645 6156 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:48:27.159659 6156 factory.go:656] Stopping watch factory\\\\nI1128 06:48:27.159676 6156 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:48:27.159720 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 06:48:27.159725 6156 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:48:27.159735 6156 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1128 06:48:27.159737 6156 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 06:48:27.159744 6156 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 06:48:27.159752 6156 handler.go:208] Removed *v1.Namespace event 
handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.846616 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.846667 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.846676 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.846692 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.846725 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.949337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.949372 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.949383 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.949400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:28 crc kubenswrapper[4889]: I1128 06:48:28.949412 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:28Z","lastTransitionTime":"2025-11-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.051469 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.051539 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.051549 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.051567 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.051579 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.101053 4889 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.154172 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.154243 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.154266 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.154295 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.154313 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.257345 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.257383 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.257393 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.257411 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.257422 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.331768 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:29 crc kubenswrapper[4889]: E1128 06:48:29.331971 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.360271 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.360313 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.360322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.360337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.360348 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.463152 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.463193 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.463204 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.463221 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.463233 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.566875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.566952 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.566970 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.566996 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.567014 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.609429 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p"] Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.609952 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.614768 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.614948 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.632980 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.641894 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/1.log" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.642646 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/0.log" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.646545 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec" exitCode=1 Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.646610 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.646697 4889 scope.go:117] "RemoveContainer" containerID="f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.647945 4889 scope.go:117] "RemoveContainer" containerID="92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec" Nov 28 06:48:29 crc kubenswrapper[4889]: E1128 06:48:29.648260 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.656307 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.669555 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.669716 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.669732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.669754 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.669769 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.673018 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.686362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.686410 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.686426 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.686448 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.686465 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.690818 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.708836 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: E1128 06:48:29.709458 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.715112 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.715145 4889 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.715158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.715176 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.715190 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: E1128 06:48:29.730682 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.734980 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.735018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.735031 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.735048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.735061 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.737586 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf4
3dd073b13ac63907fd7303ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:27Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:48:27.159472 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:48:27.159570 6156 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 06:48:27.159593 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 06:48:27.159598 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 06:48:27.159624 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:48:27.159640 6156 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:48:27.159645 6156 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:48:27.159659 6156 factory.go:656] Stopping watch factory\\\\nI1128 06:48:27.159676 6156 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:48:27.159720 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 06:48:27.159725 6156 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:48:27.159735 6156 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1128 06:48:27.159737 6156 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 06:48:27.159744 6156 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 06:48:27.159752 6156 handler.go:208] Removed *v1.Namespace event 
handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: E1128 06:48:29.751459 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.755236 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.755263 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.755275 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.755293 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.755310 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.755981 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: E1128 06:48:29.770284 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c
2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.772184 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.774612 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.774638 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.774648 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.774667 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.774680 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.798804 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.804888 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/13e49a78-73ea-47f8-8937-49dad3a59ce4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.804929 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxnw\" (UniqueName: \"kubernetes.io/projected/13e49a78-73ea-47f8-8937-49dad3a59ce4-kube-api-access-njxnw\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.804991 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/13e49a78-73ea-47f8-8937-49dad3a59ce4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.805031 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/13e49a78-73ea-47f8-8937-49dad3a59ce4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: E1128 06:48:29.816042 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c
2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z"
Nov 28 06:48:29 crc kubenswrapper[4889]: E1128 06:48:29.816167 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.818162 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.818195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.818205 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.818223 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.818235 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
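Every failed node and pod status update in this stretch of the log shares one root cause: the network-node-identity webhook on https://127.0.0.1:9743 presents a serving certificate whose NotAfter date (2025-08-24T17:21:41Z) is months before the node's current clock (2025-11-28T06:48:29Z), so the kubelet's TLS handshake aborts before any patch is delivered. A minimal Go sketch of the same validity check follows; it assumes the webhook's serving certificate is readable at /etc/webhook-cert/tls.crt (the /etc/webhook-cert/ mount appears later in this log on the network-node-identity webhook container; the exact file name is an assumption):

```go
// Minimal sketch (not the kubelet's or webhook's own code): how Go's
// crypto/x509 data explains the "certificate has expired or is not yet
// valid" errors above. Verification fails before any HTTP exchange.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path is illustrative; point it at the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// With the dates in this log: NotAfter=2025-08-24T17:21:41Z,
	// now=2025-11-28T06:48:29Z, so expired=true.
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s expired=%t\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339),
		now.After(cert.NotAfter))
}
```

Until that serving certificate is rotated (or the node clock corrected, if it is the clock that is wrong), every Post to the /node and /pod webhook endpoints will keep failing the same way, which is exactly the repeating pattern below.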
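The NodeNotReady condition directly above is a separate problem from the webhook failure: the kubelet reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness test, not the actual CRI-O/ocicni implementation, assuming readiness simply means at least one .conf, .conflist, or .json file in the watched directory:

```go
// Hypothetical readiness probe mirroring the kubelet error message
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether dir holds any plausible CNI config file.
func cniConfigPresent(dir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err == nil && len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
	if cniConfigPresent(dir) {
		fmt.Println("CNI configuration found; network plugin can report ready")
		return
	}
	fmt.Println("no CNI configuration file in", dir)
	os.Exit(1)
}
```

The condition clears on its own once the network provider (here, ovnkube, whose control-plane pod is being mounted and started in the records below) writes its config into that directory.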
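The err strings in the status_manager.go records that follow embed the attempted JSON patch in escaped form (note the strategic-merge-patch $setElementOrder/conditions directive inside), which makes them hard to read. A small, hypothetical helper for inspecting them: peel quoting layers with strconv.Unquote until the payload parses, then pretty-print. The sample literal below is an abbreviated fragment of the iptables-alerter patch, with one quoting layer; journal lines may carry more, which the loop tolerates:

```go
// Hypothetical log-reading aid, not part of any OpenShift tooling.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Abbreviated escaped fragment as it appears in the journal.
	raw := `"{\"metadata\":{\"uid\":\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\"},\"status\":{}}"`
	s := raw
	for i := 0; i < 3; i++ { // peel at most three quoting layers
		u, err := strconv.Unquote(s)
		if err != nil {
			break // no more quoting layers to remove
		}
		s = u
	}
	var buf bytes.Buffer
	if err := json.Indent(&buf, []byte(s), "", "  "); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	fmt.Println(buf.String())
}
```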
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.858987 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.883067 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.898396 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.906632 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/13e49a78-73ea-47f8-8937-49dad3a59ce4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.906679 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/13e49a78-73ea-47f8-8937-49dad3a59ce4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 
06:48:29.906786 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/13e49a78-73ea-47f8-8937-49dad3a59ce4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.907556 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/13e49a78-73ea-47f8-8937-49dad3a59ce4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.907678 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/13e49a78-73ea-47f8-8937-49dad3a59ce4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.907809 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njxnw\" (UniqueName: \"kubernetes.io/projected/13e49a78-73ea-47f8-8937-49dad3a59ce4-kube-api-access-njxnw\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.920933 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.921131 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/13e49a78-73ea-47f8-8937-49dad3a59ce4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.927491 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxnw\" (UniqueName: \"kubernetes.io/projected/13e49a78-73ea-47f8-8937-49dad3a59ce4-kube-api-access-njxnw\") pod \"ovnkube-control-plane-749d76644c-kbs8p\" (UID: \"13e49a78-73ea-47f8-8937-49dad3a59ce4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.927819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.927869 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.927890 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.927914 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.927933 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:29Z","lastTransitionTime":"2025-11-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.936941 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.943948 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.957800 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: W1128 06:48:29.957942 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13e49a78_73ea_47f8_8937_49dad3a59ce4.slice/crio-80839dc23e39c19d597c9907ec7c1cf5649da0915470bc7c01881cf1fe044af4 WatchSource:0}: Error finding container 80839dc23e39c19d597c9907ec7c1cf5649da0915470bc7c01881cf1fe044af4: Status 404 returned error can't find the container with id 80839dc23e39c19d597c9907ec7c1cf5649da0915470bc7c01881cf1fe044af4 Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.971975 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:29 crc kubenswrapper[4889]: I1128 06:48:29.992879 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:29Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.009539 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.029396 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.031370 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.031402 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.031414 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.031437 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.031450 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.053211 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.072800 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.088820 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.105066 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.121104 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.135067 4889 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.135108 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.135118 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.135138 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.135152 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.138110 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.166578 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0c919fa64b1e74e7f98c90390f3b91b87e6d21369963b41a4a539d272014dc1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:27Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:48:27.159472 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:48:27.159570 6156 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 06:48:27.159593 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 06:48:27.159598 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 06:48:27.159624 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:48:27.159640 6156 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:48:27.159645 6156 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:48:27.159659 6156 factory.go:656] Stopping watch factory\\\\nI1128 06:48:27.159676 6156 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:48:27.159720 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 06:48:27.159725 6156 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:48:27.159735 6156 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1128 06:48:27.159737 6156 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 06:48:27.159744 6156 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 06:48:27.159752 6156 handler.go:208] Removed *v1.Namespace event handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.185076 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.201925 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688
df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.216076 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.238449 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.238575 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.238598 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.238686 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.238809 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.331534 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.331534 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:30 crc kubenswrapper[4889]: E1128 06:48:30.331813 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:30 crc kubenswrapper[4889]: E1128 06:48:30.332148 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.343557 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.343641 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.343662 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.343693 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.343789 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.447124 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.447169 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.447183 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.447212 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.447228 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.550566 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.550643 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.550665 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.550698 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.550786 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.653813 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.653904 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.653927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.653964 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.653987 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.655172 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/1.log" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.661775 4889 scope.go:117] "RemoveContainer" containerID="92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec" Nov 28 06:48:30 crc kubenswrapper[4889]: E1128 06:48:30.662107 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.662483 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" event={"ID":"13e49a78-73ea-47f8-8937-49dad3a59ce4","Type":"ContainerStarted","Data":"80839dc23e39c19d597c9907ec7c1cf5649da0915470bc7c01881cf1fe044af4"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.686315 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.711880 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.735105 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.756317 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.757967 4889 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.758015 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.758036 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.758107 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.758947 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.780160 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.823820 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.849429 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.862118 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.862174 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.862196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.862233 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.862254 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.876848 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.893418 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.912397 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.931670 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.954032 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.964727 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.964779 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.964792 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.964811 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.964824 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:30Z","lastTransitionTime":"2025-11-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.977867 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:30 crc kubenswrapper[4889]: I1128 06:48:30.998219 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:30Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.022472 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.068277 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.068584 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.068669 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.068795 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.068931 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.148409 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mbrtc"] Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.148978 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:31 crc kubenswrapper[4889]: E1128 06:48:31.149053 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.172412 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.172481 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.172501 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.172529 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.172548 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.173786 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.195686 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.217537 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.223526 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.223614 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxfbz\" (UniqueName: \"kubernetes.io/projected/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-kube-api-access-vxfbz\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.238388 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.254104 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.271055 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.275404 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.275457 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.275468 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.275493 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.275506 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.297701 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.317994 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.324467 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxfbz\" (UniqueName: \"kubernetes.io/projected/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-kube-api-access-vxfbz\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.324782 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:31 crc kubenswrapper[4889]: E1128 06:48:31.325007 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:31 crc kubenswrapper[4889]: E1128 06:48:31.325128 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs podName:e209e335-9f44-41a8-a8f2-093d2bdcfe6b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:48:31.825093946 +0000 UTC m=+34.795328131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs") pod "network-metrics-daemon-mbrtc" (UID: "e209e335-9f44-41a8-a8f2-093d2bdcfe6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.331013 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:31 crc kubenswrapper[4889]: E1128 06:48:31.331234 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.346229 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8
362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.354326 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxfbz\" (UniqueName: \"kubernetes.io/projected/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-kube-api-access-vxfbz\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.363262 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.378687 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.378767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.378786 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.378816 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.378835 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.380670 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.397098 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.421485 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.441347 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.465334 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.483001 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.483061 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.483082 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.483117 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.483137 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.488459 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:31Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.587162 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.587240 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.587258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.587289 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.587309 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.690287 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.690362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.690388 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.690426 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.690451 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.794158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.794251 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.794300 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.794337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.794444 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.829665 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:31 crc kubenswrapper[4889]: E1128 06:48:31.829956 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:31 crc kubenswrapper[4889]: E1128 06:48:31.830093 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs podName:e209e335-9f44-41a8-a8f2-093d2bdcfe6b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:32.830056423 +0000 UTC m=+35.800290618 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs") pod "network-metrics-daemon-mbrtc" (UID: "e209e335-9f44-41a8-a8f2-093d2bdcfe6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.898220 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.898271 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.898285 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.898308 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:31 crc kubenswrapper[4889]: I1128 06:48:31.898325 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:31Z","lastTransitionTime":"2025-11-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.002015 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.002526 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.002653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.002821 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.002984 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.032064 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.032400 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:48.032371131 +0000 UTC m=+51.002605326 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.106059 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.106624 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.106643 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.106671 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.106689 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.133022 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.133098 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.133152 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.133191 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133308 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133354 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133395 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133426 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:48.133397281 +0000 UTC m=+51.103631476 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133429 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133455 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133464 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:48.133446012 +0000 UTC m=+51.103680197 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133476 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133562 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133590 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133517 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:48.133496674 +0000 UTC m=+51.103730859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.133779 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:48:48.13374093 +0000 UTC m=+51.103975125 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.212830 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.212901 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.212922 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.212948 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.212966 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.315995 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.316070 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.316092 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.316226 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.316269 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.331123 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.331198 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.331296 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.331474 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.331137 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.331605 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.419140 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.419196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.419211 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.419234 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.419250 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.521529 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.521616 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.521630 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.521656 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.521671 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.624632 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.624732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.624751 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.624812 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.624832 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.672568 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" event={"ID":"13e49a78-73ea-47f8-8937-49dad3a59ce4","Type":"ContainerStarted","Data":"5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.672646 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" event={"ID":"13e49a78-73ea-47f8-8937-49dad3a59ce4","Type":"ContainerStarted","Data":"a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.691473 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\
"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2ee
ade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.710023 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.728255 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.728348 4889 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.728377 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.728414 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.728435 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.730347 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 
2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.747450 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.770128 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.789409 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.815179 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.832416 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.832499 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.832552 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.832576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.832601 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.835607 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.844348 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " 
pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.844547 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: E1128 06:48:32.844634 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs podName:e209e335-9f44-41a8-a8f2-093d2bdcfe6b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:34.844611284 +0000 UTC m=+37.814845449 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs") pod "network-metrics-daemon-mbrtc" (UID: "e209e335-9f44-41a8-a8f2-093d2bdcfe6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.857337 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.879097 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.899922 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.920813 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.936582 4889 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.936666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.936692 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.936752 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.936775 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:32Z","lastTransitionTime":"2025-11-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.941642 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.960030 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:32 crc kubenswrapper[4889]: I1128 06:48:32.991107 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf4
3dd073b13ac63907fd7303ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.012809 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.040771 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.041111 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.041261 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.041404 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.041556 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.145381 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.145998 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.146239 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.146427 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.146600 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.250316 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.250894 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.251034 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.251173 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.251296 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.331795 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:33 crc kubenswrapper[4889]: E1128 06:48:33.332898 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
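Annotation: the recurring NetworkReady=false condition above is the container runtime reporting that no CNI network configuration exists yet. A simplified Go sketch of that readiness rule, assuming only the directory named in the log; the real check is performed by CRI-O via its ocicni library, which also parses and validates the file contents rather than just looking at extensions:

```go
// cnicheck.go - report whether a CNI conf directory would satisfy the
// "no CNI configuration file" check seen in this log. Simplified sketch:
// the actual runtime also validates that a file defines a usable network.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("NetworkReady=false:", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions ocicni scans for
			fmt.Println("NetworkReady=true: found", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
}
```

Once the ovnkube-node pod writes its config into this directory, the condition flips and the "Error syncing pod, skipping" entries for the sandbox-less pods stop.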
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.355319 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.355384 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.355403 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.355431 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.355453 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.458330 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.458438 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.458457 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.458483 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.458514 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.562379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.562443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.562461 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.562493 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.562529 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.664923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.664998 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.665024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.665059 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.665083 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.768176 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.768211 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.768221 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.768236 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.768248 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.871838 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.871890 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.871900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.871916 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.871926 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.975545 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.975613 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.975636 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.975665 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:33 crc kubenswrapper[4889]: I1128 06:48:33.975686 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:33Z","lastTransitionTime":"2025-11-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.078815 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.078888 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.078908 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.078940 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.078961 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.182309 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.182688 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.182882 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.183025 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.183157 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.287054 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.287135 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.287156 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.287193 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.287212 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.331381 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.331583 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.331582 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:34 crc kubenswrapper[4889]: E1128 06:48:34.332100 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
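Annotation: the condition={...} payload that setters.go prints on each cycle above is a serialized node condition. A self-contained Go sketch that reproduces the same JSON shape; the struct is declared locally for illustration rather than imported from the Kubernetes API, where the corresponding type is k8s.io/api/core/v1.NodeCondition:

```go
// nodecond.go - reproduce the Ready=False condition JSON the kubelet logs.
// Local type for illustration only; timestamps are copied from the log.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	ts := time.Date(2025, 11, 28, 6, 48, 34, 0, time.UTC).Format(time.RFC3339)
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(c)
	fmt.Println(string(out))
}
```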
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:34 crc kubenswrapper[4889]: E1128 06:48:34.332228 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:34 crc kubenswrapper[4889]: E1128 06:48:34.332326 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.391455 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.391978 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.392113 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.392313 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.392455 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.443644 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.469881 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.496137 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.496641 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.496920 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.497082 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.497239 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.500013 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.525966 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.547750 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.566125 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.588206 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.601540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.601821 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.601850 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.601877 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.601899 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.613777 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 
2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.638340 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.663352 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.690599 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.705460 4889 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.705567 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.705590 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.705619 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.705638 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.711080 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.729151 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.770914 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf4
3dd073b13ac63907fd7303ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.790262 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.809168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.809417 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.809500 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.809601 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.809685 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.812018 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.827682 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.870764 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:34 crc kubenswrapper[4889]: E1128 06:48:34.871056 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:34 crc kubenswrapper[4889]: E1128 06:48:34.871229 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs podName:e209e335-9f44-41a8-a8f2-093d2bdcfe6b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:48:38.871206074 +0000 UTC m=+41.841440239 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs") pod "network-metrics-daemon-mbrtc" (UID: "e209e335-9f44-41a8-a8f2-093d2bdcfe6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.913206 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.913271 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.913291 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.913333 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:34 crc kubenswrapper[4889]: I1128 06:48:34.913358 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:34Z","lastTransitionTime":"2025-11-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.017254 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.017337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.017361 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.017401 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.017421 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.120977 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.121038 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.121065 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.121099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.121118 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.223966 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.224028 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.224056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.224090 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.224115 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.327177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.327561 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.327652 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.327770 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.327878 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.331781 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:35 crc kubenswrapper[4889]: E1128 06:48:35.331990 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.431197 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.431307 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.431328 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.431362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.431387 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.535011 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.535177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.535205 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.535232 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.535252 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.638048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.638135 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.638164 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.638200 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.638226 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.741746 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.741837 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.741851 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.741878 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.741894 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.845284 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.845336 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.845348 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.845370 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.845386 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.948535 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.948606 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.948626 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.948744 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:35 crc kubenswrapper[4889]: I1128 06:48:35.948766 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:35Z","lastTransitionTime":"2025-11-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.053061 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.053128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.053148 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.053177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.053196 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.158606 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.158671 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.158683 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.158803 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.158829 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.262061 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.262220 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.262239 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.262269 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.262293 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.331494 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.331587 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.331517 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:36 crc kubenswrapper[4889]: E1128 06:48:36.331813 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:36 crc kubenswrapper[4889]: E1128 06:48:36.332003 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:36 crc kubenswrapper[4889]: E1128 06:48:36.332279 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.366224 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.366292 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.366315 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.366344 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.366364 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.469443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.469526 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.469550 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.469579 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.469604 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.573840 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.573917 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.573935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.573967 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.573988 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.678411 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.678488 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.678504 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.678528 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.678545 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.783050 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.783131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.783151 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.783182 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.783204 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.886521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.886594 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.886611 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.886641 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.886661 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.990009 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.990074 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.990092 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.990120 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:36 crc kubenswrapper[4889]: I1128 06:48:36.990139 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:36Z","lastTransitionTime":"2025-11-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.093774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.093855 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.093879 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.093913 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.093932 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.198069 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.198128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.198147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.198180 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.198200 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.301182 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.301238 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.301249 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.301275 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.301286 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.331491 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:37 crc kubenswrapper[4889]: E1128 06:48:37.331728 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.355236 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.376731 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.398584 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.404222 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.404297 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.404317 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.404349 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.404369 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.418253 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.441057 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.464249 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.488224 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 
2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.509071 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.509283 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.509314 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.509452 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.509487 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.511031 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.532782 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.552054 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.577039 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.612811 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.612932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.612959 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.613000 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.613026 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.622021 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"moun
tPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.644555 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 
06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.663893 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.682505 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.711006 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5
ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.716559 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.716678 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.716696 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.716768 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.716788 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.819735 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.819822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.819849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.819886 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.819912 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.922849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.922911 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.922933 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.922961 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:37 crc kubenswrapper[4889]: I1128 06:48:37.922983 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:37Z","lastTransitionTime":"2025-11-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.026047 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.026124 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.026142 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.026176 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.026200 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.131306 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.131368 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.131387 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.131415 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.131433 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.235925 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.235997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.236029 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.236064 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.236089 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.331304 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.331399 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:38 crc kubenswrapper[4889]: E1128 06:48:38.331520 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.331398 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:38 crc kubenswrapper[4889]: E1128 06:48:38.331653 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:38 crc kubenswrapper[4889]: E1128 06:48:38.331832 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.341747 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.341811 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.341832 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.341864 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.341886 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.445153 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.445228 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.445246 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.445274 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.445294 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.548685 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.548780 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.548834 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.548861 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.548879 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.652559 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.652635 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.652655 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.652690 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.652770 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.756498 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.756583 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.756601 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.756631 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.756651 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.860604 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.860691 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.860751 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.860787 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.860843 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.927492 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:38 crc kubenswrapper[4889]: E1128 06:48:38.927697 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:38 crc kubenswrapper[4889]: E1128 06:48:38.927819 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs podName:e209e335-9f44-41a8-a8f2-093d2bdcfe6b nodeName:}" failed. No retries permitted until 2025-11-28 06:48:46.927797568 +0000 UTC m=+49.898031753 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs") pod "network-metrics-daemon-mbrtc" (UID: "e209e335-9f44-41a8-a8f2-093d2bdcfe6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.964445 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.964537 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.964560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.964594 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:38 crc kubenswrapper[4889]: I1128 06:48:38.964613 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:38Z","lastTransitionTime":"2025-11-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.068418 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.068493 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.068516 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.068545 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.068564 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.171941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.172028 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.172046 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.172074 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.172095 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.276361 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.276432 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.276451 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.276480 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.276501 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.330899 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:39 crc kubenswrapper[4889]: E1128 06:48:39.331095 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.379055 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.379125 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.379144 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.379172 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.379216 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.483862 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.483928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.483947 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.483977 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.484001 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.588436 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.588508 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.588525 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.588557 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.588576 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.692096 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.692183 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.692208 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.692247 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.692270 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.795943 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.796017 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.796043 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.796078 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.796103 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.899813 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.899875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.899896 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.899921 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.899942 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.918099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.918168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.918193 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.918221 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.918243 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: E1128 06:48:39.940533 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.946820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.946876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.946893 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.946920 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.946936 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: E1128 06:48:39.968649 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.974955 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.975033 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.975051 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.975081 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:39 crc kubenswrapper[4889]: I1128 06:48:39.975103 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:39Z","lastTransitionTime":"2025-11-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:39 crc kubenswrapper[4889]: E1128 06:48:39.995475 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.001036 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.001099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.001119 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.001147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.001167 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: E1128 06:48:40.019612 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.024312 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.024366 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.024381 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.024400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.024412 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: E1128 06:48:40.043384 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:40 crc kubenswrapper[4889]: E1128 06:48:40.043536 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.045479 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.045556 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.045581 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.045619 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.045648 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.149219 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.149295 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.149313 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.149341 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.149360 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.253396 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.253482 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.253507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.253543 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.253564 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.331894 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.331981 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.331998 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:40 crc kubenswrapper[4889]: E1128 06:48:40.332123 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:40 crc kubenswrapper[4889]: E1128 06:48:40.332230 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:40 crc kubenswrapper[4889]: E1128 06:48:40.332402 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.358006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.358115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.358139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.358172 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.358195 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.463596 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.463644 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.463658 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.463680 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.463694 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.566774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.566822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.566839 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.566858 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.566871 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.670026 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.670081 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.670094 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.670118 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.670132 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.774435 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.774507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.774526 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.774554 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.774573 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.879128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.879202 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.879219 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.879248 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.879269 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.983865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.983935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.983955 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.983985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:40 crc kubenswrapper[4889]: I1128 06:48:40.984005 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:40Z","lastTransitionTime":"2025-11-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.089969 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.090611 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.090645 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.090680 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.090732 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.194685 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.194826 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.194848 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.194880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.194901 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.299033 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.299101 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.299120 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.299152 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.299178 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.331829 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:41 crc kubenswrapper[4889]: E1128 06:48:41.332091 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.402168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.402216 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.402230 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.402250 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.402265 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.506259 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.506326 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.506340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.506363 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.506377 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.611667 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.611745 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.611758 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.611781 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.611794 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.715148 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.715428 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.715507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.715631 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.715667 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.818469 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.818549 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.818567 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.818598 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.818618 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.921476 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.921544 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.921561 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.921585 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.921601 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:41Z","lastTransitionTime":"2025-11-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.929764 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:48:41 crc kubenswrapper[4889]: I1128 06:48:41.931383 4889 scope.go:117] "RemoveContainer" containerID="92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.025335 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.025422 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.025444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.025505 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.025527 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.128055 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.128137 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.128189 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.128218 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.128237 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.230835 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.230944 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.230974 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.231018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.231046 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.331370 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:42 crc kubenswrapper[4889]: E1128 06:48:42.331560 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.331616 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.331645 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:42 crc kubenswrapper[4889]: E1128 06:48:42.331951 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:42 crc kubenswrapper[4889]: E1128 06:48:42.332292 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.336776 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.336847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.336860 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.336915 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.336927 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.440325 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.440372 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.440392 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.440421 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.440439 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.543092 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.543151 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.543166 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.543186 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.543201 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.646083 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.646134 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.646144 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.646160 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.646172 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
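Each setters.go:603 entry embeds the node's Ready condition as a JSON object. A minimal sketch of decoding that payload, using a struct that mirrors the field names visible in the log (a sketch, not the upstream v1.NodeCondition type):

```go
// condition.go - decode the condition={...} payload logged by setters.go:603.
// The struct mirrors the fields visible in the log entries above.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// payload copied from one of the log entries above
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status,
		c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
```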
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.725567 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/1.log"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.729316 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb"}
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.730272 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.749452 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.749518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.749540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.749572 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.749595 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
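The "Finished parsing log file" entry above refers to a container log under /var/log/pods/. CRI runtimes such as CRI-O write those files one entry per line as "timestamp stream tag message"; a small parser sketch for that layout, assuming the standard CRI logging convention rather than anything confirmed by this log (the sample line is invented for illustration):

```go
// crilog.go - sketch of a parser for CRI container log lines of the assumed
// form "2025-11-28T06:48:42.725567000Z stdout F message...".
package main

import (
	"fmt"
	"strings"
	"time"
)

type criLine struct {
	When    time.Time
	Stream  string // "stdout" or "stderr"
	Partial bool   // tag "P" marks a partial line, "F" a full one
	Message string
}

func parseCRILine(line string) (criLine, error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return criLine{}, fmt.Errorf("malformed CRI log line: %q", line)
	}
	ts, err := time.Parse(time.RFC3339Nano, parts[0])
	if err != nil {
		return criLine{}, err
	}
	return criLine{When: ts, Stream: parts[1], Partial: parts[2] == "P", Message: parts[3]}, nil
}

func main() {
	// hypothetical sample line; real content lives in the 1.log file named above
	l, err := parseCRILine("2025-11-28T06:48:42.725567000Z stdout F starting ovnkube-controller")
	if err != nil {
		panic(err)
	}
	fmt.Printf("[%s %s] %s\n", l.When.Format(time.RFC3339), l.Stream, l.Message)
}
```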
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.752486 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.786491 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.786491 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z"
Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.803908 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.823025 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.838990 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.852392 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.852495 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.852508 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.852524 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.852535 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.852974 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.870044 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.886805 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.901929 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.916825 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.932371 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.948625 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.955379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.955449 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.955467 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.955523 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.955544 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:42Z","lastTransitionTime":"2025-11-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.973037 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:42 crc kubenswrapper[4889]: I1128 06:48:42.985744 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:42Z is after 2025-08-24T17:21:41Z" Nov 28 
06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.006821 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.019656 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.058721 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.058778 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.058788 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.058804 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.058818 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.161336 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.161405 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.161419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.161443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.161458 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.267734 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.267806 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.267821 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.267847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.267861 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.330738 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:43 crc kubenswrapper[4889]: E1128 06:48:43.330929 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
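
[Annotation] The node keeps flipping to NotReady for a single reason: the runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A minimal sketch of that check, meant to run on the node itself (directory path taken from the log messages; stdlib only):

    import json
    import pathlib

    # Directory the kubelet names in the NetworkPluginNotReady messages above.
    cni_dir = pathlib.Path("/etc/kubernetes/cni/net.d")
    configs = sorted(cni_dir.glob("*.conf*")) if cni_dir.is_dir() else []

    if not configs:
        # Matches the condition in the log: nothing for the runtime to load yet.
        print(f"no CNI configuration file in {cni_dir}/ -- network plugin not ready")
    else:
        for path in configs:
            try:
                conf = json.loads(path.read_text())
                print(path.name, "->", conf.get("name"), conf.get("cniVersion"))
            except json.JSONDecodeError as err:
                print(path.name, "-> unparseable:", err)

The directory likely stays empty here because ovnkube-controller, whose container mounts /etc/cni/net.d (see the ovnkube-node-2l6bn volumeMounts above) and writes the config, is itself crash-looping.
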
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.371310 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.371378 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.371401 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.371430 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.371455 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.474316 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.474376 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.474384 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.474404 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.474416 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.577901 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.577968 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.577986 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.578018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.578037 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.681310 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.681377 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.681395 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.681420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.681440 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.735542 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/2.log" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.736968 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/1.log" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.740602 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb" exitCode=1 Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.740662 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.740787 4889 scope.go:117] "RemoveContainer" containerID="92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.742115 4889 scope.go:117] "RemoveContainer" containerID="118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb" Nov 28 06:48:43 crc kubenswrapper[4889]: E1128 06:48:43.742388 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.777556 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.785162 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.785217 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc 
kubenswrapper[4889]: I1128 06:48:43.785238 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.785268 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.785290 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.797569 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 
28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.819157 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.840053 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.862083 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.877694 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.888872 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.888921 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.888937 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.888960 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.888977 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.893319 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.908526 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.927737 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 
2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.942533 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.959356 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.982559 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.992352 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.992404 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 
06:48:43.992418 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.992443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:43 crc kubenswrapper[4889]: I1128 06:48:43.992459 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:43Z","lastTransitionTime":"2025-11-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.003797 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.022088 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 
06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.037634 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.074197 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465e
b9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 
06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.096131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.096183 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.096195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.096214 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.096228 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.202402 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.202465 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.202478 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.202499 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.202513 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.306015 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.306115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.306134 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.306162 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.306179 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.331570 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.331645 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.331603 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:44 crc kubenswrapper[4889]: E1128 06:48:44.331839 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:44 crc kubenswrapper[4889]: E1128 06:48:44.331959 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:44 crc kubenswrapper[4889]: E1128 06:48:44.332166 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.410104 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.410290 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.410360 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.410438 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.410465 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.514235 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.514318 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.514340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.514369 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.514394 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.617659 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.617776 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.617799 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.617833 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.617852 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.682670 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.693836 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.702157 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.720452 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.720479 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.720490 4889 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.720504 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.720513 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.727577 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.746892 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/2.log" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.749855 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.754057 4889 scope.go:117] "RemoveContainer" containerID="118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb" Nov 28 06:48:44 crc kubenswrapper[4889]: E1128 06:48:44.754279 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.772875 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.797100 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.816852 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 
06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.823485 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.823555 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.823576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.823606 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.823626 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.838415 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.869842 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a9867eb14055c777eacd4cb62e59335abe2cf43dd073b13ac63907fd7303ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"message\\\":\\\"ices_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 06:48:28.519415 6284 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:48:28.519526 6284 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.890881 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.909355 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.926336 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.926521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.926551 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.926585 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.926611 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:44Z","lastTransitionTime":"2025-11-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.930539 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.948361 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.967935 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:44 crc kubenswrapper[4889]: I1128 06:48:44.988327 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.004238 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.022868 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.030358 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.030420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.030440 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.030465 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.030483 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.044090 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.069040 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.084784 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.105139 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.129270 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.134091 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.134158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.134180 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.134215 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.134236 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.150820 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.167508 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 
06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.182869 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.209858 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465e
b9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.232682 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.237525 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.237574 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.237588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.237610 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.237666 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.248631 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.268213 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.286303 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.310344 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.328950 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.331507 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:45 crc kubenswrapper[4889]: E1128 06:48:45.331896 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.340401 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.340437 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.340452 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.340468 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.340522 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.346949 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime
\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.369618 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"n
ame\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.443416 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.443457 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.443471 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.443489 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.443501 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.546491 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.546545 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.546561 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.546584 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.546604 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.649642 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.649763 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.649790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.649830 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.649856 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.754085 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.754167 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.754196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.754230 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.754257 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.857032 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.857084 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.857097 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.857118 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.857132 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.960477 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.960525 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.960541 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.960563 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:45 crc kubenswrapper[4889]: I1128 06:48:45.960627 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:45Z","lastTransitionTime":"2025-11-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.064231 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.064298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.064317 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.064342 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.064359 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.168175 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.168263 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.168290 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.168324 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.168351 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.272275 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.272340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.272362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.272399 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.272421 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.331462 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.331522 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.331462 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:46 crc kubenswrapper[4889]: E1128 06:48:46.331678 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:46 crc kubenswrapper[4889]: E1128 06:48:46.331900 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:46 crc kubenswrapper[4889]: E1128 06:48:46.332112 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.375962 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.376024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.376040 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.376065 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.376086 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.478985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.479044 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.479062 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.479087 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.479106 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.582780 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.582832 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.582849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.582871 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.582888 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.685169 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.685231 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.685255 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.685287 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.685312 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.791585 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.791662 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.791682 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.791754 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.791796 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.895683 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.895778 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.895800 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.895828 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.895850 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:46Z","lastTransitionTime":"2025-11-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:46 crc kubenswrapper[4889]: I1128 06:48:46.935569 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:46 crc kubenswrapper[4889]: E1128 06:48:46.935933 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:46 crc kubenswrapper[4889]: E1128 06:48:46.936116 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs podName:e209e335-9f44-41a8-a8f2-093d2bdcfe6b nodeName:}" failed. No retries permitted until 2025-11-28 06:49:02.936075679 +0000 UTC m=+65.906309874 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs") pod "network-metrics-daemon-mbrtc" (UID: "e209e335-9f44-41a8-a8f2-093d2bdcfe6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.000068 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.000150 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.000169 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.000199 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.000219 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.103577 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.103650 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.103676 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.103752 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.103781 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.208759 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.208838 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.208863 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.208897 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.208924 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.312683 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.312881 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.312900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.312941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.312974 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.331328 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:47 crc kubenswrapper[4889]: E1128 06:48:47.331581 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.358514 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 
06:48:47.378891 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.399908 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.417525 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.417593 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.417617 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.417645 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.417667 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.421802 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.447696 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.473638 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.492990 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.515259 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.520595 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.520656 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.520675 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.520702 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.520757 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.539447 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 
2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.558655 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.579663 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2
af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.604825 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.624743 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.624821 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.624841 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.624869 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.624889 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.625192 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.644375 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.662154 4889 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda9
6edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.690925 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465e
b9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.706644 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.727361 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.727433 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.727448 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.727472 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.727495 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.830248 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.830302 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.830326 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.830356 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.830378 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.934342 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.934420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.934440 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.934467 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:47 crc kubenswrapper[4889]: I1128 06:48:47.934488 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:47Z","lastTransitionTime":"2025-11-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.037219 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.037286 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.037303 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.037330 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.037349 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.049716 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.050023 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:49:20.049998898 +0000 UTC m=+83.020233053 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.142056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.142117 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.142131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.142158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.142173 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.150946 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.151022 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.151093 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.151136 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.151329 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.151358 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.151379 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.151447 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:49:20.151425508 +0000 UTC m=+83.121659673 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.151793 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.151879 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:49:20.151854339 +0000 UTC m=+83.122088504 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.152000 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.152047 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.152072 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.152183 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:49:20.152143617 +0000 UTC m=+83.122377822 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.152398 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.152495 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-28 06:49:20.152472765 +0000 UTC m=+83.122706960 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.245401 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.245468 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.245482 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.245507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.245526 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.331786 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.331854 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.331854 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.331995 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.332112 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:48 crc kubenswrapper[4889]: E1128 06:48:48.332208 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.348464 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.348520 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.348540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.348566 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.348584 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.450930 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.451016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.451035 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.451064 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.451085 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.553702 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.553823 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.553846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.553879 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.553901 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.657667 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.657797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.657828 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.657869 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.657897 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.760199 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.760247 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.760258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.760277 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.760287 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.864298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.864373 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.864388 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.864414 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.864433 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.967149 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.967249 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.967275 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.967311 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:48 crc kubenswrapper[4889]: I1128 06:48:48.967333 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:48Z","lastTransitionTime":"2025-11-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.069528 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.069573 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.069587 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.069614 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.069629 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.171864 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.171934 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.171955 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.171982 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.172004 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.274496 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.274566 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.274585 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.274613 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.274695 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.331451 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:49 crc kubenswrapper[4889]: E1128 06:48:49.331632 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.376606 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.376693 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.376756 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.376790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.376814 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.480773 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.480859 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.480879 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.480909 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:49 crc kubenswrapper[4889]: I1128 06:48:49.480938 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.585154 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.585266 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.585288 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.585354 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.585371 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.689252 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.689312 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.689327 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.689353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.689367 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.792790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.792842 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.792861 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.792889 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.792910 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.895812 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.895882 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.895899 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.895927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.895950 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.998337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.998399 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.998416 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.998444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:49.998463 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:49Z","lastTransitionTime":"2025-11-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.101524 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.101594 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.101612 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.101637 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.101657 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.205514 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.205592 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.205613 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.205675 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.205697 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.309236 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.309301 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.309322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.309348 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.309367 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.331643 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.331750 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.331791 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.331949 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.332215 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.371899 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.412724 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.412771 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.412781 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.412800 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.412812 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.424300 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.424354 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.424367 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.424389 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.424404 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.445493 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.450118 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.450194 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.450210 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.450230 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.450269 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.471182 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.476076 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.476128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.476266 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.476298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.476315 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.496694 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.501176 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.501223 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.501269 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.501296 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.501314 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.522254 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.527034 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.527133 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.527150 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.527177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.527196 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.546491 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:50 crc kubenswrapper[4889]: E1128 06:48:50.546697 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.549259 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.549296 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.549313 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.549363 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.549380 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.652867 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.652935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.652962 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.652996 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.653022 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.756551 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.756590 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.756599 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.756615 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.756626 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.859187 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.859240 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.859250 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.859268 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.859279 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.962060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.962099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.962111 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.962130 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:50 crc kubenswrapper[4889]: I1128 06:48:50.962141 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:50Z","lastTransitionTime":"2025-11-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.066082 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.066147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.066169 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.066195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.066218 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.168934 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.168977 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.168987 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.169006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.169017 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.271905 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.271993 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.272019 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.272054 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.272082 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.331052 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:51 crc kubenswrapper[4889]: E1128 06:48:51.331195 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.374498 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.374560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.374820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.375112 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.375132 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.478439 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.478513 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.478537 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.478570 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.478596 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.580947 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.580975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.580984 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.580999 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.581008 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.683663 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.683732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.683748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.683774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.683790 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.786396 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.786466 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.786481 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.786561 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.786574 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.889745 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.889841 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.889865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.889891 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.889908 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.992863 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.992912 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.992928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.992950 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:51 crc kubenswrapper[4889]: I1128 06:48:51.992966 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:51Z","lastTransitionTime":"2025-11-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.095951 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.096014 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.096030 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.096051 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.096064 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.198085 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.198134 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.198150 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.198169 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.198183 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.300741 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.300837 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.300855 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.300873 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.300888 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.331138 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.331170 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.331179 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:52 crc kubenswrapper[4889]: E1128 06:48:52.331284 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:52 crc kubenswrapper[4889]: E1128 06:48:52.331458 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:52 crc kubenswrapper[4889]: E1128 06:48:52.331518 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.402739 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.402784 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.402798 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.402814 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.402825 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.504889 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.504932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.504941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.504958 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.504971 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.607694 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.607797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.607811 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.607830 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.607846 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.710757 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.710793 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.710804 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.710819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.710830 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.812994 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.813051 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.813068 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.813091 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.813108 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.916161 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.916206 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.916217 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.916235 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:52 crc kubenswrapper[4889]: I1128 06:48:52.916250 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:52Z","lastTransitionTime":"2025-11-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.018754 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.018823 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.018847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.018880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.018908 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.122034 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.122112 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.122137 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.122166 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.122185 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.225679 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.225809 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.225832 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.225861 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.225881 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.329808 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.329872 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.329896 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.329928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.329954 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.330879 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:53 crc kubenswrapper[4889]: E1128 06:48:53.331052 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.432959 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.433018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.433041 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.433065 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.433083 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.535661 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.535746 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.535764 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.535790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.535814 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.638433 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.638522 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.638550 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.638581 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.638599 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.742131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.742185 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.742215 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.742243 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.742263 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.844560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.844600 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.844610 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.844627 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.844638 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.948242 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.948318 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.948339 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.948365 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:53 crc kubenswrapper[4889]: I1128 06:48:53.948389 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:53Z","lastTransitionTime":"2025-11-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.051264 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.052124 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.052149 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.052182 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.052204 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:54Z","lastTransitionTime":"2025-11-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.156026 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.156111 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.156143 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.156175 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.156196 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:54Z","lastTransitionTime":"2025-11-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.259339 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.259413 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.259437 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.259467 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.259490 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:54Z","lastTransitionTime":"2025-11-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.331594 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.331613 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:54 crc kubenswrapper[4889]: E1128 06:48:54.331860 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.331613 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:54 crc kubenswrapper[4889]: E1128 06:48:54.331978 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:54 crc kubenswrapper[4889]: E1128 06:48:54.332034 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.363259 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.363327 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.363345 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.363373 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.363395 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:54Z","lastTransitionTime":"2025-11-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.468343 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.468467 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.468517 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.468547 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:54 crc kubenswrapper[4889]: I1128 06:48:54.468600 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:54Z","lastTransitionTime":"2025-11-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... node-status cycle repeats at 06:48:54.571, 06:48:54.674, 06:48:54.778, 06:48:54.882, 06:48:54.985, 06:48:55.089, 06:48:55.193 and 06:48:55.297 ...]
Nov 28 06:48:55 crc kubenswrapper[4889]: I1128 06:48:55.331540 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:48:55 crc kubenswrapper[4889]: E1128 06:48:55.331796 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... node-status cycle repeats at 06:48:55.401 and 06:48:55.505 ...]
[... node-status cycle repeats at 06:48:55.609, 06:48:55.713, 06:48:55.817, 06:48:55.920, 06:48:56.024, 06:48:56.127 and 06:48:56.230 ...]
Nov 28 06:48:56 crc kubenswrapper[4889]: I1128 06:48:56.331400 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:48:56 crc kubenswrapper[4889]: I1128 06:48:56.331429 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:48:56 crc kubenswrapper[4889]: E1128 06:48:56.331561 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:48:56 crc kubenswrapper[4889]: I1128 06:48:56.331655 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:48:56 crc kubenswrapper[4889]: E1128 06:48:56.331780 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:48:56 crc kubenswrapper[4889]: E1128 06:48:56.332076 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... node-status cycle repeats at 06:48:56.333 and 06:48:56.437 ...]
[... node-status cycle repeats at 06:48:56.542, 06:48:56.646, 06:48:56.749, 06:48:56.851, 06:48:56.955, 06:48:57.101, 06:48:57.204 and 06:48:57.307 ...]
Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.331653 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:48:57 crc kubenswrapper[4889]: E1128 06:48:57.331964 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.333271 4889 scope.go:117] "RemoveContainer" containerID="118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb" Nov 28 06:48:57 crc kubenswrapper[4889]: E1128 06:48:57.334134 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.351409 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2
fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.374745 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a80
0981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.389970 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.404231 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.409935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.409995 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:57 crc 
kubenswrapper[4889]: I1128 06:48:57.410009 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.410032 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.410047 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:57Z","lastTransitionTime":"2025-11-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.418029 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 
28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.435364 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.450435 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.466603 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.486681 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.499996 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.514205 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.515292 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.515332 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.515343 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.515364 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.515377 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:57Z","lastTransitionTime":"2025-11-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.530045 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.544460 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.566481 4889 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf6473
8e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.583976 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.604671 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.619993 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:48:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.621069 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.621106 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.621123 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.621146 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.621160 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:57Z","lastTransitionTime":"2025-11-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.724213 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.724272 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.724292 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.724322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.724345 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:57Z","lastTransitionTime":"2025-11-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.828785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.828846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.828862 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.828889 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.828903 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:57Z","lastTransitionTime":"2025-11-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.932347 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.932447 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.932468 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.932505 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:57 crc kubenswrapper[4889]: I1128 06:48:57.932541 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:57Z","lastTransitionTime":"2025-11-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.035811 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.035849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.035860 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.035880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.035892 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.139013 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.139079 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.139103 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.139138 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.139163 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.244138 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.244575 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.244587 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.244604 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.244617 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.331411 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.331497 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:48:58 crc kubenswrapper[4889]: E1128 06:48:58.331630 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:48:58 crc kubenswrapper[4889]: E1128 06:48:58.331869 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.331969 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:48:58 crc kubenswrapper[4889]: E1128 06:48:58.332113 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.346679 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.346726 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.346739 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.346753 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.346763 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.450769 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.450846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.450865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.450894 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.450914 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.554006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.554048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.554063 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.554080 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.554108 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.656060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.656099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.656112 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.656129 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.656141 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.758580 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.758623 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.758635 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.758653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.758665 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.860876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.860941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.860967 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.861000 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.861024 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.963591 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.963631 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.963645 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.963663 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:58 crc kubenswrapper[4889]: I1128 06:48:58.963676 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:58Z","lastTransitionTime":"2025-11-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.066816 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.066857 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.066870 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.066890 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.066904 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.169469 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.169525 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.169540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.169558 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.169571 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.272376 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.272439 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.272458 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.272486 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.272504 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.331519 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:48:59 crc kubenswrapper[4889]: E1128 06:48:59.331805 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.376006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.376073 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.376092 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.376117 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.376137 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.478744 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.478813 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.478833 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.478863 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.478883 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.582299 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.582382 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.582399 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.582447 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.582462 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.685170 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.685237 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.685253 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.685281 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.685297 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.788699 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.788785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.788798 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.788818 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.788833 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.891640 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.891691 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.891721 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.891742 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.891757 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.993933 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.993996 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.994011 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.994028 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:48:59 crc kubenswrapper[4889]: I1128 06:48:59.994040 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:48:59Z","lastTransitionTime":"2025-11-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.097252 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.097309 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.097322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.097341 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.097355 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.199887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.199932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.199941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.199958 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.199976 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.308174 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.308245 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.308264 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.308293 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.308313 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.331669 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.331774 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.331817 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.331688 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.331954 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.332137 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.411865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.411917 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.411927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.411950 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.411965 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.514757 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.514802 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.514812 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.514828 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.514838 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.616812 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.616870 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.616883 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.616904 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.616921 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.679615 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.679664 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.679683 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.679729 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.679750 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.693028 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.698973 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.699100 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.699122 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.699147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.699197 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.721321 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.726838 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.726889 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.726904 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.726928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.726946 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.746511 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.751317 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.751373 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.751387 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.751408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.751424 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.766145 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.771419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.771470 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.771487 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.771505 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.771514 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.785618 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:00 crc kubenswrapper[4889]: E1128 06:49:00.785767 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.787601 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.787651 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.787664 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.787685 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.787698 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.891047 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.891096 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.891108 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.891128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.891139 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.994939 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.995052 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.995073 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.995136 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:00 crc kubenswrapper[4889]: I1128 06:49:00.995157 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:00Z","lastTransitionTime":"2025-11-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.100479 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.100583 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.100602 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.100773 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.100852 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.203797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.203875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.203910 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.203953 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.203978 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.307514 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.307569 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.307580 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.307598 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.307610 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.330923 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:01 crc kubenswrapper[4889]: E1128 06:49:01.331046 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.410964 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.411054 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.411082 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.411115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.411137 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.515108 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.515171 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.515186 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.515209 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.515225 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.618852 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.618911 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.618923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.618947 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.618960 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.722234 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.722305 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.722325 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.722354 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.722373 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.825566 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.825633 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.825652 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.825681 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.825702 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.928915 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.928978 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.928987 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.929005 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:01 crc kubenswrapper[4889]: I1128 06:49:01.929022 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:01Z","lastTransitionTime":"2025-11-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.031697 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.032158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.032177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.032197 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.032211 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.135505 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.136019 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.136173 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.136340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.136525 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.239224 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.239600 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.239622 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.239647 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.239665 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.331625 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.331787 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:02 crc kubenswrapper[4889]: E1128 06:49:02.331915 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.331943 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:02 crc kubenswrapper[4889]: E1128 06:49:02.332244 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:02 crc kubenswrapper[4889]: E1128 06:49:02.332388 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.342876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.342921 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.342931 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.342967 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.342985 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.445774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.445843 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.445868 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.445904 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.445931 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.549581 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.550022 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.550215 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.550487 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.550682 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.654355 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.654419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.654431 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.654455 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.654468 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.757850 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.757914 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.757932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.757956 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.757977 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.860818 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.861311 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.861449 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.861590 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.861772 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.971064 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.971239 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.971261 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.971292 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:02 crc kubenswrapper[4889]: I1128 06:49:02.971314 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:02Z","lastTransitionTime":"2025-11-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.032210 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:03 crc kubenswrapper[4889]: E1128 06:49:03.032392 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:49:03 crc kubenswrapper[4889]: E1128 06:49:03.032449 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs podName:e209e335-9f44-41a8-a8f2-093d2bdcfe6b nodeName:}" failed. No retries permitted until 2025-11-28 06:49:35.032429409 +0000 UTC m=+98.002663564 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs") pod "network-metrics-daemon-mbrtc" (UID: "e209e335-9f44-41a8-a8f2-093d2bdcfe6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.073597 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.073636 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.073648 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.073665 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.073678 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.193453 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.193512 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.193538 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.193563 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.193581 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
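[Annotation] The MountVolume failure above is unrelated to the CNI loop: "object ... not registered" means the kubelet's secret manager has not yet registered metrics-daemon-secret for this pod, so the secret volume plugin cannot fetch it, and nestedpendingoperations backs off exponentially, refusing further retries until 06:49:35 (durationBeforeRetry 32s). A sketch of the doubling pattern consistent with that figure, assuming the conventional 500 ms starting wait (the kubelet's actual backoff constants are internal):

    package main

    import (
        "fmt"
        "time"
    )

    // Each failed MountVolume attempt doubles the wait before the next
    // retry. Starting from an assumed 500ms base, the 7th failure lands
    // on the 32s durationBeforeRetry reported in the log above.
    func main() {
        wait := 500 * time.Millisecond
        for attempt := 1; wait <= 32*time.Second; attempt++ {
            fmt.Printf("attempt %d: wait %v before retrying\n", attempt, wait)
            wait *= 2
        }
    }

Under that schedule the previous wait would have been 16s, so this pod had already failed the mount several times before this entry.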
Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.296289 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.296327 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.296337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.296353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.296365 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.330977 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:03 crc kubenswrapper[4889]: E1128 06:49:03.331134 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.399115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.399206 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.399253 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.399278 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.399319 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.500897 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.500935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.500944 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.500960 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.500970 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.602950 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.602995 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.603024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.603041 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.603056 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.705145 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.705185 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.705195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.705210 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.705222 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.807858 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.807932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.807942 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.807961 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.807972 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.910607 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.910658 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.910679 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.910725 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:03 crc kubenswrapper[4889]: I1128 06:49:03.910743 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:03Z","lastTransitionTime":"2025-11-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.012915 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.012962 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.012976 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.012995 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.013009 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.115255 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.115351 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.115373 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.115408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.115427 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.218426 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.218492 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.218502 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.218519 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.218533 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.322852 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.322923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.322944 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.322976 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.322996 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.331280 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.331392 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.331407 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:04 crc kubenswrapper[4889]: E1128 06:49:04.331457 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:04 crc kubenswrapper[4889]: E1128 06:49:04.331516 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:04 crc kubenswrapper[4889]: E1128 06:49:04.331625 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.425797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.425875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.425894 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.425923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.425950 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.529389 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.529456 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.529470 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.529497 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.529511 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.632405 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.632470 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.632494 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.632524 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.632542 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.737639 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.737700 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.737735 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.737759 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.737775 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.818494 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/0.log" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.818595 4889 generic.go:334] "Generic (PLEG): container finished" podID="68ddfdcf-000e-45ae-a737-d3dd28115d5b" containerID="c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c" exitCode=1 Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.818730 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vtjm7" event={"ID":"68ddfdcf-000e-45ae-a737-d3dd28115d5b","Type":"ContainerDied","Data":"c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.820129 4889 scope.go:117] "RemoveContainer" containerID="c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.842400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.842437 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.842454 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.842480 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.842498 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.845692 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.862052 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.876792 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.896271 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"2025-11-28T06:48:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948\\\\n2025-11-28T06:48:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948 to /host/opt/cni/bin/\\\\n2025-11-28T06:48:19Z [verbose] multus-daemon started\\\\n2025-11-28T06:48:19Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:49:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.917199 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.931755 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.946650 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.946733 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.946779 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.946846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.947012 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:04Z","lastTransitionTime":"2025-11-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.947270 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.961147 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:04 crc kubenswrapper[4889]: I1128 06:49:04.985984 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.002230 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.018011 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.034164 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.049555 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.049629 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.049653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.049684 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.049740 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.052147 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.071551 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.088722 4889 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda9
6edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.114254 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465e
b9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.132754 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.153361 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.153428 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.153442 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.153468 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.153488 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.257773 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.257843 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.257855 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.257873 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.257887 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.331667 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:05 crc kubenswrapper[4889]: E1128 06:49:05.332009 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.360142 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.360226 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.360243 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.360272 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.360291 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.463990 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.464038 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.464047 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.464066 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.464078 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.566518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.566605 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.566625 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.566653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.566672 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.669828 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.669892 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.669912 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.669935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.669948 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.772906 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.772960 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.772970 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.772988 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.772998 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.824245 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/0.log" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.824343 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vtjm7" event={"ID":"68ddfdcf-000e-45ae-a737-d3dd28115d5b","Type":"ContainerStarted","Data":"ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.840580 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.859882 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.875840 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.875884 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.875894 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.875915 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.875928 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.879429 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.891626 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.906105 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"2025-11-28T06:48:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948\\\\n2025-11-28T06:48:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948 to /host/opt/cni/bin/\\\\n2025-11-28T06:48:19Z [verbose] multus-daemon started\\\\n2025-11-28T06:48:19Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:49:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.926000 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.946054 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.964381 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.978301 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.978380 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.978406 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.978444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.978475 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:05Z","lastTransitionTime":"2025-11-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:05 crc kubenswrapper[4889]: I1128 06:49:05.982693 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.008955 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.033006 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.058629 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.080226 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.081407 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.081453 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.081469 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.081492 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.081508 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.107840 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.125966 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:06Z is after 2025-08-24T17:21:41Z" Nov 28 
06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.144929 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.166390 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.184420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.184490 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.184511 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.184539 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.184561 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.286997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.287080 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.287102 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.287139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.287164 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.331510 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.331604 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.331670 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:06 crc kubenswrapper[4889]: E1128 06:49:06.331925 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:06 crc kubenswrapper[4889]: E1128 06:49:06.332065 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:06 crc kubenswrapper[4889]: E1128 06:49:06.332196 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.390983 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.391027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.391043 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.391062 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.391079 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.494618 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.494669 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.494681 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.494701 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.494730 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.597179 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.597259 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.597294 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.597318 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.597335 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.700518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.700565 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.700575 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.700591 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.700602 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.803993 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.804049 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.804059 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.804077 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.804091 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.906769 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.906847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.906865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.906895 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:06 crc kubenswrapper[4889]: I1128 06:49:06.906915 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:06Z","lastTransitionTime":"2025-11-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.009968 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.010029 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.010048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.010074 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.010093 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.113334 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.113403 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.113421 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.113451 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.113470 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.216044 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.216085 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.216095 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.216114 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.216128 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.319458 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.319506 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.319515 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.319535 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.319547 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.331022 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:07 crc kubenswrapper[4889]: E1128 06:49:07.331179 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
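The pod_workers "Error syncing pod, skipping" entry above shows the other side of the same gate: pods that need the cluster network are refused sync until the runtime reports NetworkReady=true, while host-network pods (the static kube-apiserver and kube-scheduler pods, which keep running throughout this log) are exempt. A simplified model of that gate — the pod struct and syncAllowed helper are hypothetical names chosen for illustration, not kubelet internals:

```go
// Models why non-host-network pods are skipped while CNI is down.
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	name        string
	hostNetwork bool
}

var errNetworkNotReady = errors.New("network is not ready: container runtime network not ready: NetworkReady=false")

// syncAllowed mirrors the gate: only host-network pods may sync before CNI is up.
func syncAllowed(p pod, networkReady bool) error {
	if networkReady || p.hostNetwork {
		return nil
	}
	return errNetworkNotReady
}

func main() {
	pods := []pod{
		{"openshift-network-diagnostics/network-check-source-55646444c4-trplf", false},
		{"openshift-kube-apiserver/kube-apiserver-crc", true}, // static pod on the host network
	}
	for _, p := range pods {
		if err := syncAllowed(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: pod=%q err=%v\n", p.name, err)
			continue
		}
		fmt.Printf("syncing pod %q\n", p.name)
	}
}
```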
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.349615 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.370215 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.392750 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.410217 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.422512 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.422557 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.422570 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.422593 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.422613 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.424687 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.443220 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"2025-11-28T06:48:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948\\\\n2025-11-28T06:48:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948 to /host/opt/cni/bin/\\\\n2025-11-28T06:48:19Z [verbose] multus-daemon started\\\\n2025-11-28T06:48:19Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:49:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.457995 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.471454 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.492999 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.512876 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.528056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.528127 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.528143 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.528164 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.528198 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
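The err= fields in these records embed the attempted status patch as a Go-escaped JSON string, which makes the $setElementOrder/conditions strategic-merge-patch directives hard to read. A small helper that unescapes and pretty-prints such a fragment — the embedded literal below is a shortened stand-in built from the networking-console-plugin pod's UID above, not a full patch:

```go
// Unquotes a Go-escaped JSON patch string (as embedded in the kubelet
// log) and pretty-prints it for inspection.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// Shortened stand-in for one escaped patch fragment from the log.
	escaped := `"{\"metadata\":{\"uid\":\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\"},\"status\":{\"podIP\":null}}"`
	raw, err := strconv.Unquote(escaped) // undo the log's string quoting
	if err != nil {
		log.Fatal(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(pretty.String())
}
```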
Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.534546 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.549821 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.568005 4889 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda9
6edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.600602 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465e
b9ad203fec44aa3ba184cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.618805 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.631661 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.631731 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.631745 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.631769 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.631785 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.643529 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.658076 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.734896 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.734953 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.734971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.734997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.735016 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.837455 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.837564 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.837579 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.837598 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.837613 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.940216 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.940275 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.940291 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.940312 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:07 crc kubenswrapper[4889]: I1128 06:49:07.940327 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:07Z","lastTransitionTime":"2025-11-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.043292 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.043337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.043347 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.043362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.043374 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.146777 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.146850 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.146870 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.146898 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.146917 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.249586 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.249636 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.249647 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.249665 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.249677 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.331454 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.331496 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.331472 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:08 crc kubenswrapper[4889]: E1128 06:49:08.331609 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:08 crc kubenswrapper[4889]: E1128 06:49:08.331766 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:08 crc kubenswrapper[4889]: E1128 06:49:08.331832 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.352309 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.352360 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.352373 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.352391 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.352404 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.454079 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.454123 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.454135 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.454150 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.454162 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.556900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.557005 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.557024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.557051 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.557071 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.659393 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.659465 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.659481 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.659507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.659528 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.762913 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.762968 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.762980 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.762999 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.763013 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.865126 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.865168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.865180 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.865197 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.865208 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.968588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.968641 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.968650 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.968668 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:08 crc kubenswrapper[4889]: I1128 06:49:08.968699 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:08Z","lastTransitionTime":"2025-11-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.072060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.072112 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.072122 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.072139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.072149 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.174508 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.174550 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.174560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.174576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.174587 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.277455 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.277498 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.277507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.277524 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.277536 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.331754 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:09 crc kubenswrapper[4889]: E1128 06:49:09.332523 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.333232 4889 scope.go:117] "RemoveContainer" containerID="118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.381468 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.381495 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.381505 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.381521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.381532 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.483763 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.483793 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.483803 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.483822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.483832 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.586144 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.586192 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.586210 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.586233 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.586246 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.692979 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.693045 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.693062 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.693086 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.693107 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.795877 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.795926 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.795940 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.795957 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.795970 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.839853 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/2.log" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.844505 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.845922 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.860760 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.872101 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.887320 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.898660 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.898756 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.898778 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.898807 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.898826 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:09Z","lastTransitionTime":"2025-11-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.902573 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.914976 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.931655 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"2025-11-28T06:48:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948\\\\n2025-11-28T06:48:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948 to /host/opt/cni/bin/\\\\n2025-11-28T06:48:19Z [verbose] multus-daemon started\\\\n2025-11-28T06:48:19Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:49:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.946422 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.961831 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.977571 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:09 crc kubenswrapper[4889]: I1128 06:49:09.991476 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.001286 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.001327 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 
06:49:10.001340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.001361 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.001376 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.010037 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.025663 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.039424 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.053431 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06
bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.104407 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.104460 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.104474 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
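
Every "Failed to update status for pod" entry above fails the same way: the kubelet's status patch is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and the TLS handshake rejects the webhook's serving certificate because the current time (2025-11-28T06:49:10Z) falls after the certificate's NotAfter date (2025-08-24T17:21:41Z). As an illustrative aside (not part of the log), the Go sketch below mirrors the validity-window comparison that crypto/x509 applies during verification; the certificate path is a hypothetical placeholder for wherever the webhook's serving certificate has been exported.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path: export the webhook's serving certificate here first.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	// The same window check TLS verification performs: a certificate is
	// acceptable only between NotBefore and NotAfter.
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Run against the expired serving certificate, this prints the same "current time ... is after ..." comparison seen in every webhook failure above; on a CRC cluster that pattern usually indicates stale internally signed certificates, which the cert-regeneration controllers (visible running in the kube-apiserver pod status above) rotate once the control plane settles.
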
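The kube-multus restart captured in the multus-vtjm7 patch payload above shows the second mechanism in play: the daemon polls for a readiness-indicator file written by the default network (here /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, per the logged message) and exits with "timed out waiting for the condition" when the poll expires. Below is a minimal sketch of such a file-wait loop, assuming illustrative interval and timeout values chosen to match the roughly 45-second gap between the daemon start (06:48:19Z) and the error (06:49:04Z) in the logged message; multus's real implementation uses Kubernetes wait helpers rather than this hand-rolled loop.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses, mirroring the
// readiness-indicator failure mode logged by kube-multus above.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file appeared; default network is ready
		} else if !errors.Is(err, os.ErrNotExist) {
			return err // unexpected stat error, not just "missing"
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path from the log above; interval and timeout are illustrative only.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println("result:", err)
}

Once OVN-Kubernetes writes that indicator file, the same loop returns immediately, which is consistent with the kube-multus container reaching Ready at 06:49:05Z in the payload above.
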
Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.104495 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.104509 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.111223 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33fbf17fcc68896db95d945a921911844f6f2326
8efc2ac64fdf922a717a0c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.140169 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.161912 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.207314 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.207358 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.207370 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.207388 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.207402 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.309559 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.309617 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.309631 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.309654 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.309668 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.331346 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.331542 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.331676 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:10 crc kubenswrapper[4889]: E1128 06:49:10.331787 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:10 crc kubenswrapper[4889]: E1128 06:49:10.331858 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:10 crc kubenswrapper[4889]: E1128 06:49:10.331682 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.413005 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.413092 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.413118 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.413147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.413171 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.516520 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.516582 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.516605 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.516639 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.516665 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.619363 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.619432 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.619452 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.619480 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.619501 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.722556 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.722646 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.722670 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.722704 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.722762 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.826167 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.826229 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.826247 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.826274 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.826292 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.851098 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/3.log" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.852119 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/2.log" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.856611 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" exitCode=1 Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.856687 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.856804 4889 scope.go:117] "RemoveContainer" containerID="118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.857979 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" Nov 28 06:49:10 crc kubenswrapper[4889]: E1128 06:49:10.858262 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.885136 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.902348 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.920484 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.929547 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.929620 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.929640 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.929670 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.929766 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:10Z","lastTransitionTime":"2025-11-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.944322 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"2025-11-28T06:48:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948\\\\n2025-11-28T06:48:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948 to /host/opt/cni/bin/\\\\n2025-11-28T06:48:19Z [verbose] multus-daemon started\\\\n2025-11-28T06:48:19Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:49:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:10 crc kubenswrapper[4889]: I1128 06:49:10.978423 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.001332 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.025243 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.032932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.033007 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.033033 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.033067 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.033089 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.045625 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.068009 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.086822 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.090939 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.090988 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.091005 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.091032 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.091051 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.110086 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: E1128 06:49:11.111419 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.120424 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.120476 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.120495 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.120522 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.120540 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.135440 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: E1128 06:49:11.144754 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.150078 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.150117 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.150134 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.150161 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.150180 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.154342 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.174338 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: E1128 06:49:11.176584 4889 kubelet_node_status.go:585] 
"Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index
@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"]
,\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":4
63179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.181250 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.181305 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.181324 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.181349 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.181368 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.203373 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: E1128 06:49:11.207803 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.212697 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.212750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.212763 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.212785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.212800 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.229492 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33fbf17fcc68896db95d945a921911844f6f2326
8efc2ac64fdf922a717a0c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://118375f9d1048263a800981e104681d4cc49465eb9ad203fec44aa3ba184cddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:48:43Z\\\",\\\"message\\\":\\\"ller-manager-crc in node crc\\\\nI1128 06:48:43.054330 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 06:48:43.054338 6499 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 06:48:43.054371 6499 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054391 6499 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 06:48:43.054399 6499 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 06:48:43.054406 6499 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1128 06:48:43.054412 6499 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1128 06:48:43.054416 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:10Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI1128 06:49:10.293900 6855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:49:10.294093 6855 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:49:10.294400 6855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:49:10.295078 6855 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:49:10.295205 6855 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 06:49:10.295215 6855 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:49:10.295242 6855 factory.go:656] Stopping watch factory\\\\nI1128 06:49:10.295284 6855 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:49:10.313449 6855 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 06:49:10.313472 6855 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 06:49:10.313563 6855 ovnkube.go:599] Stopped ovnkube\\\\nI1128 
06:49:10.313592 6855 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:49:10.313664 6855 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: E1128 06:49:11.231591 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"980f1d8a-b8dc-483a-92cf-447ce2d2f4e8\\\",\\\"systemUUID\\\":\\\"c2965de2-18dd-4931-940c-3947028e6c9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: E1128 06:49:11.232772 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.235338 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.235413 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.235437 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.235470 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.235496 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.247470 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.330894 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:11 crc kubenswrapper[4889]: E1128 06:49:11.331145 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.338637 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.338692 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.338738 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.338767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.338786 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.442480 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.442532 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.442550 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.442575 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.442593 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.546276 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.546353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.546370 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.546400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.546420 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.649831 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.649900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.649946 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.649975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.649995 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.753571 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.753633 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.753654 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.753678 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.753698 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.857696 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.857875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.857894 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.857926 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.857947 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.864882 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/3.log" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.871601 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" Nov 28 06:49:11 crc kubenswrapper[4889]: E1128 06:49:11.872099 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.891964 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.909455 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.932491 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"2025-11-28T06:48:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948\\\\n2025-11-28T06:48:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948 to /host/opt/cni/bin/\\\\n2025-11-28T06:48:19Z [verbose] multus-daemon started\\\\n2025-11-28T06:48:19Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:49:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.955383 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.969114 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.969198 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.969215 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.969241 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.969262 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:11Z","lastTransitionTime":"2025-11-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:11 crc kubenswrapper[4889]: I1128 06:49:11.984673 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.009256 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.023335 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.048096 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.069480 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.073067 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.073136 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.073156 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.073185 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.073208 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.091338 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.112361 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.131502 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.152865 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0
f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.176045 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.176140 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.176168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.176202 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.176225 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.188629 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:10Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI1128 06:49:10.293900 6855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:49:10.294093 6855 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:49:10.294400 6855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:49:10.295078 6855 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:49:10.295205 6855 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 06:49:10.295215 6855 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:49:10.295242 6855 factory.go:656] Stopping watch factory\\\\nI1128 06:49:10.295284 6855 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:49:10.313449 6855 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 06:49:10.313472 6855 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 06:49:10.313563 6855 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:49:10.313592 6855 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:49:10.313664 6855 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:49:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.209383 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.234361 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.256879 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:12Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.280013 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.280071 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.280089 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.280117 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.280139 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.331182 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.331288 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:49:12 crc kubenswrapper[4889]: E1128 06:49:12.331359 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:49:12 crc kubenswrapper[4889]: E1128 06:49:12.331545 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.331898 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:49:12 crc kubenswrapper[4889]: E1128 06:49:12.332029 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.385081 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.385146 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.385159 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.385181 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.385394 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.488347 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.488411 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.488427 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.488454 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.488472 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.592638 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.592777 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.592807 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.592846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.592869 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.696024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.696104 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.696125 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.696158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.696178 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.798928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.799000 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.799019 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.799047 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.799068 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.917177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.917245 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.917261 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.917283 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:12 crc kubenswrapper[4889]: I1128 06:49:12.917299 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:12Z","lastTransitionTime":"2025-11-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.021595 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.021643 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.021655 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.021675 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.021688 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.124604 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.124671 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.124690 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.124751 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.124768 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.227951 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.228007 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.228021 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.228042 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.228057 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.330765 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.330824 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.330841 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.330957 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.330991 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: E1128 06:49:13.330991 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.331012 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.433916 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.433978 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.433997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.434026 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.434038 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.536129 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.536179 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.536190 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.536207 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.536222 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.639814 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.639906 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.639931 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.639973 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.639998 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.743515 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.743576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.743588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.743607 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.743621 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.847064 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.847119 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.847134 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.847155 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.847175 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.950985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.951035 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.951049 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.951068 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:13 crc kubenswrapper[4889]: I1128 06:49:13.951082 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:13Z","lastTransitionTime":"2025-11-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.053843 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.053905 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.053924 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.054652 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.054876 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.158984 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.159060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.159078 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.159105 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.159123 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.262840 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.262887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.262899 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.262919 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.262933 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.330979 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.331119 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:49:14 crc kubenswrapper[4889]: E1128 06:49:14.331190 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.331008 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:49:14 crc kubenswrapper[4889]: E1128 06:49:14.331297 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:49:14 crc kubenswrapper[4889]: E1128 06:49:14.331484 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.365391 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.365444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.365458 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.365478 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.365492 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.468264 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.468328 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.468338 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.468353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.468364 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.570893 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.570939 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.570949 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.570967 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.570980 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.673928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.674005 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.674027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.674059 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.674082 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.777675 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.777754 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.777779 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.777802 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.777817 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.880175 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.880214 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.880224 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.880238 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.880250 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.983085 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.983121 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.983129 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.983144 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:14 crc kubenswrapper[4889]: I1128 06:49:14.983153 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:14Z","lastTransitionTime":"2025-11-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.085671 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.085732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.085745 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.085760 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.085773 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.188250 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.188311 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.188328 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.188357 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.188380 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.291940 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.292017 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.292039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.292071 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.292207 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.331211 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:49:15 crc kubenswrapper[4889]: E1128 06:49:15.331440 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.395314 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.395407 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.395430 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.395475 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.395501 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.498752 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.498820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.498836 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.498865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.498900 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.601985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.602068 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.602094 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.602124 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.602146 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.705589 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.705648 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.705666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.705696 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.705749 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.808948 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.809047 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.809073 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.809115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.809142 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.911865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.911927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.911941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.911964 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:15 crc kubenswrapper[4889]: I1128 06:49:15.911981 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:15Z","lastTransitionTime":"2025-11-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.014633 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.014748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.014768 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.014796 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.014815 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.117513 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.117555 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.117570 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.117589 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.117602 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.220465 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.220524 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.220538 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.220556 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.220572 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.323761 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.323828 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.323847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.323874 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.323897 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.330970 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.331048 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.331129 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:49:16 crc kubenswrapper[4889]: E1128 06:49:16.331130 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:49:16 crc kubenswrapper[4889]: E1128 06:49:16.331295 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:49:16 crc kubenswrapper[4889]: E1128 06:49:16.331453 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.427852 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.427983 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.428016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.428050 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.428095 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.531323 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.531392 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.531409 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.531471 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.531490 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.634694 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.634787 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.634805 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.634836 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.634855 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.737591 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.737654 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.737677 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.737741 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.737769 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.840125 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.840177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.840189 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.840207 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.840219 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.944623 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.944673 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.944691 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.944742 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:16 crc kubenswrapper[4889]: I1128 06:49:16.944762 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:16Z","lastTransitionTime":"2025-11-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.047298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.047349 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.047367 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.047394 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.047416 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.150897 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.150976 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.150998 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.151030 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.151050 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.254099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.254165 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.254185 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.254209 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.254229 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.331898 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:49:17 crc kubenswrapper[4889]: E1128 06:49:17.332137 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.348519 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.351316 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m98zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4901957d-ef15-4af5-a61b-b3d632c871d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6cead9c6686955a78e4a898cae7c55d4b83597cd00df1182ed91dfeda98192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f59c5aad3fd459235b77888f8c16813cd098fc3becd0c000e5b6112f7b20426d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c74b5d54459c02ca30375809d4434f40d453b38566ba79bdc42e2b9c4a58171a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f42f56ff27818c8fa3afd7f79fbd11d0f52051f6fe00844bcc26c7aa9a07ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e38500df43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca89191981cf37337c022f772c1197a8384c15207b9bef67585765e385
00df43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ed6f2762b817e06e6097c172ef98f870663ed23361a302ff895d0ae53c8be4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8647402e53b5756b9c7ea01ce93cbc1f595beefce3e555c2609d4e99f3387a95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4fxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m98zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.357518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.357576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.357596 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.357626 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.357666 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.370231 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48xq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473fe0ca-e884-4f0a-8c28-4994f487ca5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4de39174b7bae3402139a38ab82339a71ef333ab7c888b0eb7f553e93899af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhr52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48xq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.392497 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vtjm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ddfdcf-000e-45ae-a737-d3dd28115d5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:04Z\\\",\\\"message\\\":\\\"2025-11-28T06:48:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948\\\\n2025-11-28T06:48:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2b2a648b-393d-40c4-b0ba-1c30f2e0e948 to /host/opt/cni/bin/\\\\n2025-11-28T06:48:19Z [verbose] multus-daemon started\\\\n2025-11-28T06:48:19Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:49:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
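The termination message above captures the readiness check that ended the first kube-multus container: it polled for the OVN-written indicator file at /host/run/multus/cni/net.d/10-ovn-kubernetes.conf from 06:48:19 until 06:49:04 and then gave up. An illustrative sketch of that kind of bounded wait follows; this is not multus's actual code, and the 45-second timeout simply mirrors the gap in the message:

    # Illustration only, not multus's implementation: wait for a CNI
    # readiness indicator file, giving up the way the "timed out waiting
    # for the condition" message above does.
    import os
    import time

    def wait_for_file(path: str, timeout: float = 45.0,
                      interval: float = 1.0) -> bool:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(path):
                return True
            time.sleep(interval)
        return False

    if not wait_for_file("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"):
        raise SystemExit("timed out waiting for the readiness indicator file")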
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x69mv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vtjm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.413603 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.434476 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
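Every status patch in this stretch fails identically: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-28. A diagnostic sketch for confirming the serving certificate's validity window from the node; the third-party cryptography package is an assumed dependency, and verification is disabled on purpose because the handshake is known to fail:

    # Diagnostic sketch, not from the log: print the webhook certificate's
    # validity window. ssl.getpeercert() returns no fields under CERT_NONE,
    # hence the cryptography package for parsing the DER bytes.
    import socket
    import ssl

    from cryptography import x509

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # cert is known-expired

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # log says 2025-08-24T17:21:41Z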
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.457778 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b69fc7a1271584e3e0911347b0063997f72a962d75d9a40d7af6bb4e3d43191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.460083 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.460161 4889 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.460186 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.460221 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.460242 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.476331 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd67b7209328337a22acb6c3d9598701097f2b685190b3c96dfd179e0944298a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.493845 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8glkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e310263-912f-4269-81da-423af72f5ffc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d34d06ff8b76e58d331c6ac888d2984f6100531255ebf6d6d3550463ace036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8glkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.515294 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027e3d13-3693-4e70-bd3a-e63d0faa96f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764312490\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764312489\\\\\\\\\\\\\\\" (2025-11-28 05:48:09 +0000 UTC to 2026-11-28 05:48:09 +0000 UTC (now=2025-11-28 06:48:16.041123892 +0000 UTC))\\\\\\\"\\\\nI1128 06:48:16.041229 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 06:48:16.041311 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 06:48:16.041387 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041424 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 06:48:16.041508 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-311302179/tls.crt::/tmp/serving-cert-311302179/tls.key\\\\\\\"\\\\nI1128 06:48:16.041790 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 06:48:16.042225 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042287 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1128 06:48:16.042336 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 06:48:16.042364 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 06:48:16.042611 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1128 06:48:16.042640 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1128 06:48:16.043816 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.533590 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d9f7f0-60ff-4fa6-878e-8f6033e4d147\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f70d22dafe13b089c23c7460d4647336bdfd756455e6c12dd66cba62df9bbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87ad6bf60e02f8831a7131d5570ffc4c6e696b24c69f5d0ce4433e8c5000dc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0a0db23d63c438ab31e7bfc137963d158e82d65b646fccafdd5fe63001fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f639f28a620a5d8f1dbe9f75c0e0bb2813f4947180ae642ea5b3cea6bf3617f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.552539 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a445581e185c50ac61ababea39f68d0a658e658f76fea84a8b09122ad3de8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83c4ffcb495a9d4d577da26011b176b43f95a5d87c2952ae11788f353ce9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.564493 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.564593 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.564613 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.564644 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.564665 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
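By this point the same expired-certificate error has blocked status patches for pods in openshift-multus, openshift-image-registry, openshift-network-diagnostics, openshift-network-operator, openshift-dns, openshift-kube-apiserver, openshift-kube-scheduler, and openshift-network-node-identity. A companion sketch to the earlier helper, under the same saved-journal assumption, for tallying the affected pods:

    # Companion to the earlier helper, same kubelet.log assumption: count
    # how many status patches failed per pod on the expired webhook cert.
    import collections
    import re

    PATTERN = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')

    with open("kubelet.log", encoding="utf-8") as fh:
        counts = collections.Counter(PATTERN.findall(fh.read()))

    for pod, n in counts.most_common():
        print(f"{n:3d}  {pod}")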
Has your network provider started?"} Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.572119 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.592844 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a6707da-48a9-4e38-a1b2-df82148f0cd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be307fc6d3dc31a0e801a2c4af6cbc1ec7671a70648f93b2e925d5909758b7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.611755 4889 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxfbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mbrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.631821 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37489316-e6f0-4c63-ae10-78983fb84bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbbd7ea9dc20510a884d1f2dd0a2b2db29c52176e3e5bcc456d6b2ea7351214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7d9172a5405a69bafc719f649a62a3a6e15b1cf2b2fabd958b30c33b4e86b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de722af81c92cda96edc44e91e0f6e2165c775b4f93834f56410660a8cd8bb08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.666308 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de1d273-3dcf-4772-bc88-323f46e1ead5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33fbf17fcc68896db95d945a921911844f6f2326
8efc2ac64fdf922a717a0c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:49:10Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI1128 06:49:10.293900 6855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:49:10.294093 6855 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:49:10.294400 6855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:49:10.295078 6855 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:49:10.295205 6855 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 06:49:10.295215 6855 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:49:10.295242 6855 factory.go:656] Stopping watch factory\\\\nI1128 06:49:10.295284 6855 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:49:10.313449 6855 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 06:49:10.313472 6855 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 06:49:10.313563 6855 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:49:10.313592 6855 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:49:10.313664 6855 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:49:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:48:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvxwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2l6bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.668016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.668075 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.668092 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.668148 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.668169 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.685638 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13e49a78-73ea-47f8-8937-49dad3a59ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369c4337e0dcbaa4d08905b62f816f051171673fdcff2c7d4299aa548646907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5437960f6c6d114838b667bb926865a2c21004518fd7c71eb55f27084bc2d875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:48:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbs8p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:49:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.770782 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.770837 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.770854 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.770875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.770889 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.879408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.879476 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.879493 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.879519 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.879537 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.982097 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.982162 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.982180 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.982207 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:17 crc kubenswrapper[4889]: I1128 06:49:17.982229 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:17Z","lastTransitionTime":"2025-11-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.084987 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.085027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.085039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.085081 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.085095 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.187329 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.187364 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.187375 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.187393 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.187405 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.291051 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.291131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.291160 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.291193 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.291225 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.331364 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.331529 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:18 crc kubenswrapper[4889]: E1128 06:49:18.331676 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.331965 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:18 crc kubenswrapper[4889]: E1128 06:49:18.332232 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:18 crc kubenswrapper[4889]: E1128 06:49:18.332312 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
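The status-patch failures above all share one root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, months before the node's current time of 2025-11-28T06:49:17Z. A minimal sketch for confirming that validity window from the node follows; the host and port come from the log, Python with the third-party cryptography package is an assumption, and the handshake is deliberately unverified so the expired certificate can still be fetched.

import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the kubelet log

# get_server_certificate() handshakes without verification, so it can
# retrieve a certificate that strict clients (like the kubelet) reject.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc).replace(tzinfo=None)  # naive UTC, to match
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)
print("expired:  ", now > cert.not_valid_after)

Against this log's endpoint the last line would print True, matching the "x509: certificate has expired or is not yet valid" failures recorded above.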
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.393167 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.393242 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.393264 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.393293 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.393315 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.495880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.495939 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.495952 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.495971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.495984 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.599353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.599430 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.599450 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.599478 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.599497 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.703002 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.703381 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.703405 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.703435 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.703459 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.806923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.806993 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.807019 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.807048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.807070 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.909820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.909884 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.909907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.909937 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:18 crc kubenswrapper[4889]: I1128 06:49:18.909959 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:18Z","lastTransitionTime":"2025-11-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.013661 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.013767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.013791 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.013819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.013840 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.116969 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.117045 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.117069 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.117100 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.117123 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.220805 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.220872 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.220888 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.220914 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.220932 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.325024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.325102 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.325120 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.325147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.325168 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.331461 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:19 crc kubenswrapper[4889]: E1128 06:49:19.331668 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
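The NodeNotReady heartbeats keep repeating because the container runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/; on an OVN-Kubernetes node that file is normally written by the ovnkube-controller container, which is crash-looping above. A small watcher for the directory, as a sketch: the path is taken from the log, while the *.conf/*.conflist filename patterns are the usual CNI conventions, assumed rather than read from this log.

import json
import time
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log
PATTERNS = ("*.conf", "*.conflist")          # conventional CNI config names

def list_cni_configs(directory: Path) -> list[tuple[str, list[str]]]:
    """Return (filename, plugin types) for each parseable CNI config."""
    found = []
    for pattern in PATTERNS:
        for path in sorted(directory.glob(pattern)):
            try:
                data = json.loads(path.read_text())
            except (OSError, json.JSONDecodeError):
                continue                     # partial or unreadable file
            if not isinstance(data, dict):
                continue
            # A .conflist carries a "plugins" array; a .conf is one plugin.
            plugins = data.get("plugins", [data])
            found.append((path.name, [p.get("type", "?") for p in plugins]))
    return found

if __name__ == "__main__":
    while True:
        configs = list_cni_configs(CNI_DIR)
        print(configs or "no CNI configuration file yet")
        if configs:
            break
        time.sleep(5)

Once a config file appears there, the NetworkReady=false condition should clear on the runtime's next status sync and the NodeNotReady heartbeats should stop.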
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.428301 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.428401 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.428419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.428448 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.428470 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.532666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.532780 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.532805 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.532831 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.532853 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.636383 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.636485 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.636502 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.636529 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.636548 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.740759 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.740827 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.740848 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.740880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.740903 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.844176 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.844329 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.844385 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.844899 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.844935 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.948269 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.948350 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.948368 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.948395 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:19 crc kubenswrapper[4889]: I1128 06:49:19.948414 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:19Z","lastTransitionTime":"2025-11-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.051215 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.051323 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.051410 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.051450 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.051475 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.132503 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.132780 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.132753084 +0000 UTC m=+147.102987249 (durationBeforeRetry 1m4s). 
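Both retry delays visible in this log follow a doubling backoff pattern: the ovnkube-controller restart earlier is at "back-off 40s" with restartCount 3, consistent with kubelet's container restart backoff (10 s base, roughly 5-minute cap), and the volume unmount just above reports durationBeforeRetry 1m4s, i.e. 64 s, consistent with doubling from a 500 ms base (its failure detail continues after this sketch). The bases and cap are assumed defaults, not values printed in this log; the sketch only shows the arithmetic.

def backoff(failures: int, base: float, cap: float = float("inf")) -> float:
    """Delay before the next retry after `failures` consecutive failures,
    for a backoff that starts at `base` seconds and doubles each time."""
    return min(base * 2 ** (failures - 1), cap)

# Container restarts: 10 s base, ~5 min cap -> 10, 20, 40 s ...
print([backoff(n, 10.0, 300.0) for n in (1, 2, 3)])  # [10.0, 20.0, 40.0]

# Volume operations: a 500 ms base reaches 64 s (the 1m4s above) on the
# eighth consecutive failure under this model.
print(backoff(8, 0.5))  # 64.0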
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.154517 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.154566 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.154577 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.154599 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.154614 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.234581 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.234701 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.234820 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.234878 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.234931 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235029 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235062 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.235027959 +0000 UTC m=+147.205262154 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235078 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235095 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235106 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235109 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235172 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235192 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.235150563 +0000 UTC m=+147.205384908 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235196 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235229 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.235210374 +0000 UTC m=+147.205444779 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.235285 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.235256236 +0000 UTC m=+147.205490421 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.257990 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.258088 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.258115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.258149 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.258173 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.331066 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.331121 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.331082 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.331279 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.331406 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:20 crc kubenswrapper[4889]: E1128 06:49:20.332310 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.361832 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.361893 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.361908 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.361926 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.361937 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.466258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.466342 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.466372 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.466407 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.466432 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.570979 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.571081 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.571128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.571156 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.571176 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.675595 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.675662 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.675682 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.675749 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.675771 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.779129 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.779196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.779213 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.779242 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.779261 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.882289 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.882353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.882372 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.882399 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.882418 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.985916 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.985983 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.986001 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.986028 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:20 crc kubenswrapper[4889]: I1128 06:49:20.986049 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:20Z","lastTransitionTime":"2025-11-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.089074 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.089145 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.089165 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.089195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.089214 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:21Z","lastTransitionTime":"2025-11-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.191030 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.191067 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.191076 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.191092 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.191103 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:21Z","lastTransitionTime":"2025-11-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.255931 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.255963 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.255971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.255985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.255996 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:49:21Z","lastTransitionTime":"2025-11-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.331085 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:21 crc kubenswrapper[4889]: E1128 06:49:21.331212 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.955880 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz"] Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.956461 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.959772 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.960218 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.960483 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.960631 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 06:49:21 crc kubenswrapper[4889]: I1128 06:49:21.989255 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=59.989215359 podStartE2EDuration="59.989215359s" podCreationTimestamp="2025-11-28 06:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:21.989013904 +0000 UTC m=+84.959248069" watchObservedRunningTime="2025-11-28 06:49:21.989215359 +0000 UTC m=+84.959449554" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.055226 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20397094-57ec-48f2-b34c-09e33719fc00-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.055369 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20397094-57ec-48f2-b34c-09e33719fc00-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.055437 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/20397094-57ec-48f2-b34c-09e33719fc00-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.055556 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20397094-57ec-48f2-b34c-09e33719fc00-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.055608 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20397094-57ec-48f2-b34c-09e33719fc00-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.062755 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbs8p" podStartSLOduration=65.062699979 podStartE2EDuration="1m5.062699979s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.034933462 +0000 UTC m=+85.005167657" watchObservedRunningTime="2025-11-28 06:49:22.062699979 +0000 UTC m=+85.032934134" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.087431 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-m98zh" podStartSLOduration=65.087406352 podStartE2EDuration="1m5.087406352s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.087218507 +0000 UTC m=+85.057452692" watchObservedRunningTime="2025-11-28 06:49:22.087406352 +0000 UTC m=+85.057640507" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.110217 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-48xq6" podStartSLOduration=65.110185171 podStartE2EDuration="1m5.110185171s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.109529173 +0000 UTC m=+85.079763358" watchObservedRunningTime="2025-11-28 06:49:22.110185171 +0000 UTC m=+85.080419366" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.156810 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20397094-57ec-48f2-b34c-09e33719fc00-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.156887 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/20397094-57ec-48f2-b34c-09e33719fc00-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.156911 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20397094-57ec-48f2-b34c-09e33719fc00-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.156930 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20397094-57ec-48f2-b34c-09e33719fc00-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.156960 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20397094-57ec-48f2-b34c-09e33719fc00-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.157032 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20397094-57ec-48f2-b34c-09e33719fc00-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.157010 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20397094-57ec-48f2-b34c-09e33719fc00-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.158865 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20397094-57ec-48f2-b34c-09e33719fc00-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.166092 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20397094-57ec-48f2-b34c-09e33719fc00-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: \"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.182258 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20397094-57ec-48f2-b34c-09e33719fc00-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6ftzz\" (UID: 
\"20397094-57ec-48f2-b34c-09e33719fc00\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.182514 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8glkz" podStartSLOduration=66.182486148 podStartE2EDuration="1m6.182486148s" podCreationTimestamp="2025-11-28 06:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.162448835 +0000 UTC m=+85.132683000" watchObservedRunningTime="2025-11-28 06:49:22.182486148 +0000 UTC m=+85.152720323" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.182897 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vtjm7" podStartSLOduration=65.18288804 podStartE2EDuration="1m5.18288804s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.181693407 +0000 UTC m=+85.151927572" watchObservedRunningTime="2025-11-28 06:49:22.18288804 +0000 UTC m=+85.153122205" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.199852 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.199820767 podStartE2EDuration="5.199820767s" podCreationTimestamp="2025-11-28 06:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.1988206 +0000 UTC m=+85.169054785" watchObservedRunningTime="2025-11-28 06:49:22.199820767 +0000 UTC m=+85.170054962" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.262825 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podStartSLOduration=65.262798247 podStartE2EDuration="1m5.262798247s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.246049944 +0000 UTC m=+85.216284119" watchObservedRunningTime="2025-11-28 06:49:22.262798247 +0000 UTC m=+85.233032412" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.272536 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.310645 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.310617638 podStartE2EDuration="1m6.310617638s" podCreationTimestamp="2025-11-28 06:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.292677843 +0000 UTC m=+85.262912018" watchObservedRunningTime="2025-11-28 06:49:22.310617638 +0000 UTC m=+85.280851803" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.330671 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:22 crc kubenswrapper[4889]: E1128 06:49:22.330875 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.331158 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:22 crc kubenswrapper[4889]: E1128 06:49:22.331269 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.331477 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:22 crc kubenswrapper[4889]: E1128 06:49:22.331634 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.331998 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.331969458 podStartE2EDuration="38.331969458s" podCreationTimestamp="2025-11-28 06:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.311993526 +0000 UTC m=+85.282227701" watchObservedRunningTime="2025-11-28 06:49:22.331969458 +0000 UTC m=+85.302203613" Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.912091 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" event={"ID":"20397094-57ec-48f2-b34c-09e33719fc00","Type":"ContainerStarted","Data":"339432b68b5425cde8d76b7434e07fb21f98274f426d1d189cf48d4346c1c49e"} Nov 28 06:49:22 crc kubenswrapper[4889]: I1128 06:49:22.912159 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" event={"ID":"20397094-57ec-48f2-b34c-09e33719fc00","Type":"ContainerStarted","Data":"ee4e1b08f971c546c1261ed4514cdfa540b8177310238e65481795e0f24fb991"} Nov 28 06:49:23 crc kubenswrapper[4889]: I1128 06:49:23.335688 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:23 crc kubenswrapper[4889]: E1128 06:49:23.335941 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:23 crc kubenswrapper[4889]: I1128 06:49:23.337446 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" Nov 28 06:49:23 crc kubenswrapper[4889]: E1128 06:49:23.337864 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:49:24 crc kubenswrapper[4889]: I1128 06:49:24.330676 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:24 crc kubenswrapper[4889]: I1128 06:49:24.330750 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:24 crc kubenswrapper[4889]: E1128 06:49:24.330861 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:24 crc kubenswrapper[4889]: I1128 06:49:24.330750 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:24 crc kubenswrapper[4889]: E1128 06:49:24.331006 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:24 crc kubenswrapper[4889]: E1128 06:49:24.331138 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:25 crc kubenswrapper[4889]: I1128 06:49:25.330889 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:25 crc kubenswrapper[4889]: E1128 06:49:25.331406 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:26 crc kubenswrapper[4889]: I1128 06:49:26.331081 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:26 crc kubenswrapper[4889]: I1128 06:49:26.331164 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:26 crc kubenswrapper[4889]: E1128 06:49:26.331253 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:26 crc kubenswrapper[4889]: I1128 06:49:26.331190 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:26 crc kubenswrapper[4889]: E1128 06:49:26.331352 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:26 crc kubenswrapper[4889]: E1128 06:49:26.331467 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:27 crc kubenswrapper[4889]: I1128 06:49:27.331354 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:27 crc kubenswrapper[4889]: E1128 06:49:27.332914 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:28 crc kubenswrapper[4889]: I1128 06:49:28.331378 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:28 crc kubenswrapper[4889]: I1128 06:49:28.331550 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:28 crc kubenswrapper[4889]: I1128 06:49:28.331701 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:28 crc kubenswrapper[4889]: E1128 06:49:28.331869 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:28 crc kubenswrapper[4889]: E1128 06:49:28.332036 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:28 crc kubenswrapper[4889]: E1128 06:49:28.332235 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:29 crc kubenswrapper[4889]: I1128 06:49:29.331538 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:29 crc kubenswrapper[4889]: E1128 06:49:29.331747 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:30 crc kubenswrapper[4889]: I1128 06:49:30.331525 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:30 crc kubenswrapper[4889]: I1128 06:49:30.331528 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:30 crc kubenswrapper[4889]: E1128 06:49:30.331801 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:30 crc kubenswrapper[4889]: I1128 06:49:30.332047 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:30 crc kubenswrapper[4889]: E1128 06:49:30.332325 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:30 crc kubenswrapper[4889]: E1128 06:49:30.332500 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:31 crc kubenswrapper[4889]: I1128 06:49:31.331586 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:31 crc kubenswrapper[4889]: E1128 06:49:31.332012 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:32 crc kubenswrapper[4889]: I1128 06:49:32.331539 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:32 crc kubenswrapper[4889]: I1128 06:49:32.331669 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:32 crc kubenswrapper[4889]: E1128 06:49:32.331825 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:32 crc kubenswrapper[4889]: I1128 06:49:32.331692 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:32 crc kubenswrapper[4889]: E1128 06:49:32.332050 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:32 crc kubenswrapper[4889]: E1128 06:49:32.332361 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:33 crc kubenswrapper[4889]: I1128 06:49:33.331121 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:33 crc kubenswrapper[4889]: E1128 06:49:33.331465 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:34 crc kubenswrapper[4889]: I1128 06:49:34.330982 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:34 crc kubenswrapper[4889]: I1128 06:49:34.331085 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:34 crc kubenswrapper[4889]: I1128 06:49:34.331010 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:34 crc kubenswrapper[4889]: E1128 06:49:34.331225 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:34 crc kubenswrapper[4889]: E1128 06:49:34.331416 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:34 crc kubenswrapper[4889]: E1128 06:49:34.331571 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:35 crc kubenswrapper[4889]: I1128 06:49:35.120892 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:35 crc kubenswrapper[4889]: E1128 06:49:35.121259 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:49:35 crc kubenswrapper[4889]: E1128 06:49:35.121406 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs podName:e209e335-9f44-41a8-a8f2-093d2bdcfe6b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:39.121374945 +0000 UTC m=+162.091609140 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs") pod "network-metrics-daemon-mbrtc" (UID: "e209e335-9f44-41a8-a8f2-093d2bdcfe6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:49:35 crc kubenswrapper[4889]: I1128 06:49:35.331571 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:35 crc kubenswrapper[4889]: E1128 06:49:35.331890 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:35 crc kubenswrapper[4889]: I1128 06:49:35.332865 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" Nov 28 06:49:35 crc kubenswrapper[4889]: E1128 06:49:35.333157 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:49:36 crc kubenswrapper[4889]: I1128 06:49:36.330686 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:36 crc kubenswrapper[4889]: I1128 06:49:36.330857 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:36 crc kubenswrapper[4889]: E1128 06:49:36.331281 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:36 crc kubenswrapper[4889]: E1128 06:49:36.331449 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:36 crc kubenswrapper[4889]: I1128 06:49:36.330939 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:36 crc kubenswrapper[4889]: E1128 06:49:36.331563 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:37 crc kubenswrapper[4889]: I1128 06:49:37.331210 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:37 crc kubenswrapper[4889]: E1128 06:49:37.332272 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:38 crc kubenswrapper[4889]: I1128 06:49:38.331089 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:38 crc kubenswrapper[4889]: I1128 06:49:38.331181 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:38 crc kubenswrapper[4889]: I1128 06:49:38.331089 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:38 crc kubenswrapper[4889]: E1128 06:49:38.332012 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:38 crc kubenswrapper[4889]: E1128 06:49:38.332228 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:38 crc kubenswrapper[4889]: E1128 06:49:38.331968 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:39 crc kubenswrapper[4889]: I1128 06:49:39.330840 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:39 crc kubenswrapper[4889]: E1128 06:49:39.331907 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:40 crc kubenswrapper[4889]: I1128 06:49:40.331702 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:40 crc kubenswrapper[4889]: I1128 06:49:40.331750 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:40 crc kubenswrapper[4889]: I1128 06:49:40.331754 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:40 crc kubenswrapper[4889]: E1128 06:49:40.333090 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:40 crc kubenswrapper[4889]: E1128 06:49:40.333260 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:40 crc kubenswrapper[4889]: E1128 06:49:40.333489 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:41 crc kubenswrapper[4889]: I1128 06:49:41.331230 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:41 crc kubenswrapper[4889]: E1128 06:49:41.331433 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:42 crc kubenswrapper[4889]: I1128 06:49:42.331738 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:42 crc kubenswrapper[4889]: I1128 06:49:42.331816 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:42 crc kubenswrapper[4889]: I1128 06:49:42.331961 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:42 crc kubenswrapper[4889]: E1128 06:49:42.332182 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:42 crc kubenswrapper[4889]: E1128 06:49:42.332317 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:42 crc kubenswrapper[4889]: E1128 06:49:42.332541 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:42 crc kubenswrapper[4889]: I1128 06:49:42.356620 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6ftzz" podStartSLOduration=85.356590439 podStartE2EDuration="1m25.356590439s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:22.932224959 +0000 UTC m=+85.902459154" watchObservedRunningTime="2025-11-28 06:49:42.356590439 +0000 UTC m=+105.326824634" Nov 28 06:49:42 crc kubenswrapper[4889]: I1128 06:49:42.357469 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 28 06:49:43 crc kubenswrapper[4889]: I1128 06:49:43.331023 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:43 crc kubenswrapper[4889]: E1128 06:49:43.331291 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:44 crc kubenswrapper[4889]: I1128 06:49:44.331847 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:44 crc kubenswrapper[4889]: I1128 06:49:44.331896 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:44 crc kubenswrapper[4889]: E1128 06:49:44.333052 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:44 crc kubenswrapper[4889]: E1128 06:49:44.333132 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:44 crc kubenswrapper[4889]: I1128 06:49:44.331897 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:44 crc kubenswrapper[4889]: E1128 06:49:44.333242 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:45 crc kubenswrapper[4889]: I1128 06:49:45.331269 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:45 crc kubenswrapper[4889]: E1128 06:49:45.332021 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:46 crc kubenswrapper[4889]: I1128 06:49:46.331202 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:46 crc kubenswrapper[4889]: I1128 06:49:46.331196 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:46 crc kubenswrapper[4889]: E1128 06:49:46.331635 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:46 crc kubenswrapper[4889]: E1128 06:49:46.331906 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:46 crc kubenswrapper[4889]: I1128 06:49:46.332083 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:46 crc kubenswrapper[4889]: E1128 06:49:46.332346 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:47 crc kubenswrapper[4889]: I1128 06:49:47.331391 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:47 crc kubenswrapper[4889]: E1128 06:49:47.333540 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:47 crc kubenswrapper[4889]: I1128 06:49:47.334873 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" Nov 28 06:49:47 crc kubenswrapper[4889]: E1128 06:49:47.335200 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2l6bn_openshift-ovn-kubernetes(6de1d273-3dcf-4772-bc88-323f46e1ead5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" Nov 28 06:49:48 crc kubenswrapper[4889]: I1128 06:49:48.331827 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:48 crc kubenswrapper[4889]: I1128 06:49:48.331873 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:48 crc kubenswrapper[4889]: E1128 06:49:48.332086 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:48 crc kubenswrapper[4889]: I1128 06:49:48.332201 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:48 crc kubenswrapper[4889]: E1128 06:49:48.332473 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:48 crc kubenswrapper[4889]: E1128 06:49:48.332625 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:49 crc kubenswrapper[4889]: I1128 06:49:49.330900 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:49:49 crc kubenswrapper[4889]: E1128 06:49:49.331137 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:49:50 crc kubenswrapper[4889]: I1128 06:49:50.330963 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:49:50 crc kubenswrapper[4889]: I1128 06:49:50.331049 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc" Nov 28 06:49:50 crc kubenswrapper[4889]: E1128 06:49:50.331148 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:49:50 crc kubenswrapper[4889]: I1128 06:49:50.331274 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:49:50 crc kubenswrapper[4889]: E1128 06:49:50.331456 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b" Nov 28 06:49:50 crc kubenswrapper[4889]: E1128 06:49:50.331533 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:49:51 crc kubenswrapper[4889]: I1128 06:49:51.026642 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/1.log" Nov 28 06:49:51 crc kubenswrapper[4889]: I1128 06:49:51.027280 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/0.log" Nov 28 06:49:51 crc kubenswrapper[4889]: I1128 06:49:51.027358 4889 generic.go:334] "Generic (PLEG): container finished" podID="68ddfdcf-000e-45ae-a737-d3dd28115d5b" containerID="ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b" exitCode=1 Nov 28 06:49:51 crc kubenswrapper[4889]: I1128 06:49:51.027419 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vtjm7" event={"ID":"68ddfdcf-000e-45ae-a737-d3dd28115d5b","Type":"ContainerDied","Data":"ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b"} Nov 28 06:49:51 crc kubenswrapper[4889]: I1128 06:49:51.027492 4889 scope.go:117] "RemoveContainer" containerID="c4ba5d926e731b05e064144c752caad022b756ae42bb746e027df2fb16b7358c" Nov 28 06:49:51 crc kubenswrapper[4889]: I1128 06:49:51.028099 4889 scope.go:117] "RemoveContainer" containerID="ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b" Nov 28 06:49:51 crc kubenswrapper[4889]: E1128 06:49:51.028421 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-vtjm7_openshift-multus(68ddfdcf-000e-45ae-a737-d3dd28115d5b)\"" pod="openshift-multus/multus-vtjm7" podUID="68ddfdcf-000e-45ae-a737-d3dd28115d5b" Nov 28 06:49:51 crc kubenswrapper[4889]: I1128 06:49:51.062613 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.06258004 podStartE2EDuration="9.06258004s" podCreationTimestamp="2025-11-28 06:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:49:47.381150222 +0000 UTC m=+110.351384457" watchObservedRunningTime="2025-11-28 06:49:51.06258004 +0000 UTC m=+114.032814235" 
Nov 28 06:49:51 crc kubenswrapper[4889]: I1128 06:49:51.331864 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:49:51 crc kubenswrapper[4889]: E1128 06:49:51.332143 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:49:52 crc kubenswrapper[4889]: I1128 06:49:52.034231 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/1.log"
Nov 28 06:49:52 crc kubenswrapper[4889]: I1128 06:49:52.330793 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:49:52 crc kubenswrapper[4889]: I1128 06:49:52.330877 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:49:52 crc kubenswrapper[4889]: E1128 06:49:52.330936 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:49:52 crc kubenswrapper[4889]: E1128 06:49:52.331192 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:49:52 crc kubenswrapper[4889]: I1128 06:49:52.331798 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:49:52 crc kubenswrapper[4889]: E1128 06:49:52.332145 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:49:53 crc kubenswrapper[4889]: I1128 06:49:53.331470 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:49:53 crc kubenswrapper[4889]: E1128 06:49:53.331814 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:49:54 crc kubenswrapper[4889]: I1128 06:49:54.331158 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:49:54 crc kubenswrapper[4889]: I1128 06:49:54.331238 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:49:54 crc kubenswrapper[4889]: I1128 06:49:54.331169 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:49:54 crc kubenswrapper[4889]: E1128 06:49:54.331440 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:49:54 crc kubenswrapper[4889]: E1128 06:49:54.331580 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:49:54 crc kubenswrapper[4889]: E1128 06:49:54.331696 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:49:55 crc kubenswrapper[4889]: I1128 06:49:55.331156 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:49:55 crc kubenswrapper[4889]: E1128 06:49:55.331402 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:49:56 crc kubenswrapper[4889]: I1128 06:49:56.331340 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:49:56 crc kubenswrapper[4889]: I1128 06:49:56.331554 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:49:56 crc kubenswrapper[4889]: E1128 06:49:56.331839 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:49:56 crc kubenswrapper[4889]: I1128 06:49:56.331884 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:49:56 crc kubenswrapper[4889]: E1128 06:49:56.332105 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:49:56 crc kubenswrapper[4889]: E1128 06:49:56.332465 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:49:57 crc kubenswrapper[4889]: E1128 06:49:57.312843 4889 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Nov 28 06:49:57 crc kubenswrapper[4889]: I1128 06:49:57.331603 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:49:57 crc kubenswrapper[4889]: E1128 06:49:57.333050 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:49:57 crc kubenswrapper[4889]: E1128 06:49:57.853774 4889 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 06:49:58 crc kubenswrapper[4889]: I1128 06:49:58.331389 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:49:58 crc kubenswrapper[4889]: I1128 06:49:58.331436 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:49:58 crc kubenswrapper[4889]: E1128 06:49:58.331577 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:49:58 crc kubenswrapper[4889]: E1128 06:49:58.331750 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:49:58 crc kubenswrapper[4889]: I1128 06:49:58.332225 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:49:58 crc kubenswrapper[4889]: E1128 06:49:58.332504 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:49:59 crc kubenswrapper[4889]: I1128 06:49:59.331191 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:49:59 crc kubenswrapper[4889]: E1128 06:49:59.331425 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:50:00 crc kubenswrapper[4889]: I1128 06:50:00.331538 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:50:00 crc kubenswrapper[4889]: I1128 06:50:00.331684 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:50:00 crc kubenswrapper[4889]: I1128 06:50:00.331666 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:00 crc kubenswrapper[4889]: E1128 06:50:00.331912 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:50:00 crc kubenswrapper[4889]: E1128 06:50:00.332243 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:50:00 crc kubenswrapper[4889]: E1128 06:50:00.332282 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:50:01 crc kubenswrapper[4889]: I1128 06:50:01.331109 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:50:01 crc kubenswrapper[4889]: E1128 06:50:01.331348 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:50:01 crc kubenswrapper[4889]: I1128 06:50:01.332286 4889 scope.go:117] "RemoveContainer" containerID="ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b"
Nov 28 06:50:02 crc kubenswrapper[4889]: I1128 06:50:02.077191 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/1.log"
Nov 28 06:50:02 crc kubenswrapper[4889]: I1128 06:50:02.077245 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vtjm7" event={"ID":"68ddfdcf-000e-45ae-a737-d3dd28115d5b","Type":"ContainerStarted","Data":"52ae5f5374660ca9bd0699777aa53aaebd429485f4384242509e782ae0c613a9"}
Nov 28 06:50:02 crc kubenswrapper[4889]: I1128 06:50:02.331176 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:50:02 crc kubenswrapper[4889]: I1128 06:50:02.331295 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:02 crc kubenswrapper[4889]: I1128 06:50:02.331190 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:50:02 crc kubenswrapper[4889]: E1128 06:50:02.331377 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:50:02 crc kubenswrapper[4889]: E1128 06:50:02.331478 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:50:02 crc kubenswrapper[4889]: E1128 06:50:02.332116 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:50:02 crc kubenswrapper[4889]: I1128 06:50:02.332731 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"
Nov 28 06:50:02 crc kubenswrapper[4889]: E1128 06:50:02.854761 4889 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 06:50:03 crc kubenswrapper[4889]: I1128 06:50:03.083355 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/3.log"
Nov 28 06:50:03 crc kubenswrapper[4889]: I1128 06:50:03.086071 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerStarted","Data":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"}
Nov 28 06:50:03 crc kubenswrapper[4889]: I1128 06:50:03.086744 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn"
Nov 28 06:50:03 crc kubenswrapper[4889]: I1128 06:50:03.125559 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podStartSLOduration=106.125533724 podStartE2EDuration="1m46.125533724s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:03.124094804 +0000 UTC m=+126.094328999" watchObservedRunningTime="2025-11-28 06:50:03.125533724 +0000 UTC m=+126.095767879"
Nov 28 06:50:03 crc kubenswrapper[4889]: I1128 06:50:03.237914 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mbrtc"]
Nov 28 06:50:03 crc kubenswrapper[4889]: I1128 06:50:03.238067 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:03 crc kubenswrapper[4889]: E1128 06:50:03.238169 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:50:03 crc kubenswrapper[4889]: I1128 06:50:03.333331 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:50:03 crc kubenswrapper[4889]: E1128 06:50:03.333479 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:50:04 crc kubenswrapper[4889]: I1128 06:50:04.330899 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:50:04 crc kubenswrapper[4889]: I1128 06:50:04.330943 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:50:04 crc kubenswrapper[4889]: E1128 06:50:04.331646 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:50:04 crc kubenswrapper[4889]: E1128 06:50:04.331841 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:50:05 crc kubenswrapper[4889]: I1128 06:50:05.331043 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:05 crc kubenswrapper[4889]: I1128 06:50:05.331183 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:50:05 crc kubenswrapper[4889]: E1128 06:50:05.331271 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:50:05 crc kubenswrapper[4889]: E1128 06:50:05.331419 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:50:06 crc kubenswrapper[4889]: I1128 06:50:06.331636 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:50:06 crc kubenswrapper[4889]: I1128 06:50:06.331637 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:50:06 crc kubenswrapper[4889]: E1128 06:50:06.331895 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:50:06 crc kubenswrapper[4889]: E1128 06:50:06.332008 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:50:07 crc kubenswrapper[4889]: I1128 06:50:07.332014 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:50:07 crc kubenswrapper[4889]: I1128 06:50:07.332052 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:07 crc kubenswrapper[4889]: E1128 06:50:07.334561 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:50:07 crc kubenswrapper[4889]: E1128 06:50:07.335043 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mbrtc" podUID="e209e335-9f44-41a8-a8f2-093d2bdcfe6b"
Nov 28 06:50:08 crc kubenswrapper[4889]: I1128 06:50:08.331519 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:50:08 crc kubenswrapper[4889]: I1128 06:50:08.331574 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:50:08 crc kubenswrapper[4889]: I1128 06:50:08.334684 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 28 06:50:08 crc kubenswrapper[4889]: I1128 06:50:08.335561 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 28 06:50:08 crc kubenswrapper[4889]: I1128 06:50:08.335750 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 28 06:50:08 crc kubenswrapper[4889]: I1128 06:50:08.335842 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 28 06:50:09 crc kubenswrapper[4889]: I1128 06:50:09.331128 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:50:09 crc kubenswrapper[4889]: I1128 06:50:09.331128 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:09 crc kubenswrapper[4889]: I1128 06:50:09.335205 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 28 06:50:09 crc kubenswrapper[4889]: I1128 06:50:09.335380 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 28 06:50:11 crc kubenswrapper[4889]: I1128 06:50:11.952006 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.398142 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.441018 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.442206 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.442696 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hn9w9"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.443664 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.444303 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4kcw5"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.444989 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.465762 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.465785 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.465986 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.466636 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.470123 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.470526 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.471005 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.471244 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.471657 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.471964 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.472170 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.472696 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.472881 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.472952 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.473154 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.473638 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.474133 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.475927 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.476611 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.477312 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.477625 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.484978 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bw9t7"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.485587 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bw9t7"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.485649 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.486067 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.486133 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.487077 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.487337 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc"]
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.487718 4889 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.488224 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.488396 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.488436 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.492870 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9h4ng"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.493456 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mgdw9"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.493617 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.493900 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.494183 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.494479 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.495362 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ptzsm"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.496209 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.498373 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.499066 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.501794 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vbw2g"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.502436 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.508960 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a917d9bc-242b-4537-b454-edab3a6da7d6-images\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509003 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509042 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/262b0ea9-ac8b-4698-bea0-283f94e34240-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5fx7n\" (UID: \"262b0ea9-ac8b-4698-bea0-283f94e34240\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509071 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blr28\" (UniqueName: \"kubernetes.io/projected/262b0ea9-ac8b-4698-bea0-283f94e34240-kube-api-access-blr28\") pod \"cluster-samples-operator-665b6dd947-5fx7n\" (UID: \"262b0ea9-ac8b-4698-bea0-283f94e34240\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509089 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-config\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509104 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-client-ca\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509219 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a917d9bc-242b-4537-b454-edab3a6da7d6-config\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509243 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fz4l\" (UniqueName: \"kubernetes.io/projected/a917d9bc-242b-4537-b454-edab3a6da7d6-kube-api-access-4fz4l\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: 
\"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509260 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvll\" (UniqueName: \"kubernetes.io/projected/32d7045a-59bd-4637-9365-be7ca63fab06-kube-api-access-8qvll\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509277 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a917d9bc-242b-4537-b454-edab3a6da7d6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.509295 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32d7045a-59bd-4637-9365-be7ca63fab06-serving-cert\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.511498 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.511776 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.512134 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.512433 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.512564 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lcslc"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.512601 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.512922 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.513005 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.513246 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.513333 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.513443 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.519314 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjpk7"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.520190 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.520693 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.521062 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.521300 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.521606 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.521802 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.522049 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.522185 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.522304 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.522725 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.523398 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmplj"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.524378 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.524636 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525208 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525270 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525307 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525346 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.527092 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525375 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525376 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525533 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.529894 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.530007 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.530151 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.530227 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.530312 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.530399 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.530419 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525573 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525607 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525643 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: 
I1128 06:50:13.530782 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.531067 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.531176 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.541909 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.542358 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.542530 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.542694 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.542857 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.543026 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.543169 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.543371 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.543554 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.543722 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525675 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.543849 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.543955 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.525744 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.530617 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.545405 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.545741 4889 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.545889 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.546501 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.546856 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.547245 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.547282 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.547428 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.547855 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.549025 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.549280 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.549432 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.549784 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.550063 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.550311 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.530191 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.551218 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.551434 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.552891 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.545425 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.553483 4889 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.554789 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.555015 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.555283 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.560480 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.569104 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.569773 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.570377 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.597802 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-skddf"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.598535 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.599579 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tsgkn"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.600277 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.600644 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.600912 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.602390 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.604239 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.604897 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.605259 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.605394 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.605723 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.606838 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.609637 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4kcw5"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.609999 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610200 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hml6b\" (UniqueName: \"kubernetes.io/projected/11e5ff8e-7175-4c44-a641-e01582ee0e38-kube-api-access-hml6b\") pod \"dns-operator-744455d44c-gmplj\" (UID: \"11e5ff8e-7175-4c44-a641-e01582ee0e38\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610238 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7491558-b178-467b-9a43-f41ef8f00f9e-serving-cert\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610290 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a917d9bc-242b-4537-b454-edab3a6da7d6-config\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610312 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610337 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5kb8\" (UniqueName: \"kubernetes.io/projected/f7491558-b178-467b-9a43-f41ef8f00f9e-kube-api-access-m5kb8\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610375 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a917d9bc-242b-4537-b454-edab3a6da7d6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610399 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32d7045a-59bd-4637-9365-be7ca63fab06-serving-cert\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610425 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-config\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610445 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610468 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610484 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610567 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-client\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610598 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0921de10-3f1e-4264-b771-90c1b1e1ddbc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.610633 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl7v2\" (UniqueName: \"kubernetes.io/projected/8789adc8-7db9-46c9-994b-b5be723cc076-kube-api-access-sl7v2\") pod \"downloads-7954f5f757-bw9t7\" (UID: \"8789adc8-7db9-46c9-994b-b5be723cc076\") " pod="openshift-console/downloads-7954f5f757-bw9t7" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.611461 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.613641 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hn9w9"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.613716 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cc6md"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.614287 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.614348 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.618615 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.614753 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-config\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.636661 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.636720 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.636763 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-encryption-config\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.636798 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-image-import-ca\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.636822 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11e5ff8e-7175-4c44-a641-e01582ee0e38-metrics-tls\") pod \"dns-operator-744455d44c-gmplj\" (UID: \"11e5ff8e-7175-4c44-a641-e01582ee0e38\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.614442 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638081 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a917d9bc-242b-4537-b454-edab3a6da7d6-config\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.619830 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32d7045a-59bd-4637-9365-be7ca63fab06-serving-cert\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638231 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/262b0ea9-ac8b-4698-bea0-283f94e34240-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5fx7n\" (UID: \"262b0ea9-ac8b-4698-bea0-283f94e34240\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638268 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktb7c\" (UniqueName: \"kubernetes.io/projected/aca1ea5e-ae14-45a8-9a19-acaea4176a13-kube-api-access-ktb7c\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638293 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-config\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638325 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00834fa5-849b-48d5-984e-7526dc4f71b4-serving-cert\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.622963 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638349 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07339f94-9b18-4cdb-9e19-5068a64c5bd7-config\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.619924 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638378 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638402 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638445 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-client-ca\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638468 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30b525af-4632-4fe9-bdd7-6ca436cedeb7-node-pullsecrets\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638509 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-serving-cert\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638533 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-encryption-config\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638562 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-service-ca\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.620018 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638588 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae59079-e71d-4cb5-960d-6bafe6f27d81-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638611 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638636 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-etcd-serving-ca\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638659 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-serving-cert\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.620082 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-client-ca\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638917 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppfs\" (UniqueName: \"kubernetes.io/projected/0ae59079-e71d-4cb5-960d-6bafe6f27d81-kube-api-access-xppfs\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638941 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5qvk\" (UniqueName: \"kubernetes.io/projected/00834fa5-849b-48d5-984e-7526dc4f71b4-kube-api-access-d5qvk\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638969 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07339f94-9b18-4cdb-9e19-5068a64c5bd7-trusted-ca\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.638992 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-policies\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639023 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30b525af-4632-4fe9-bdd7-6ca436cedeb7-audit-dir\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639067 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-config\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639113 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgh5\" (UniqueName: \"kubernetes.io/projected/5c2857b5-4c19-4889-915a-1477fc6ce9c6-kube-api-access-mvgh5\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639295 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdvm\" (UniqueName: \"kubernetes.io/projected/0921de10-3f1e-4264-b771-90c1b1e1ddbc-kube-api-access-8zdvm\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639315 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-trusted-ca-bundle\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639340 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-audit\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639363 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-oauth-config\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639394 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fz4l\" (UniqueName: \"kubernetes.io/projected/a917d9bc-242b-4537-b454-edab3a6da7d6-kube-api-access-4fz4l\") pod 
\"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639416 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qvll\" (UniqueName: \"kubernetes.io/projected/32d7045a-59bd-4637-9365-be7ca63fab06-kube-api-access-8qvll\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639453 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2857b5-4c19-4889-915a-1477fc6ce9c6-serving-cert\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.624353 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639671 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7z7k\" (UniqueName: \"kubernetes.io/projected/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-kube-api-access-m7z7k\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639728 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0921de10-3f1e-4264-b771-90c1b1e1ddbc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639754 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639778 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639808 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwkg\" (UniqueName: \"kubernetes.io/projected/7b21e995-d113-4b15-b59e-1ba217a862bc-kube-api-access-xnwkg\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 
28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639840 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tpqn\" (UniqueName: \"kubernetes.io/projected/07339f94-9b18-4cdb-9e19-5068a64c5bd7-kube-api-access-8tpqn\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.639867 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-dir\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.624984 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640093 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8502f12d-fa3b-441f-b96d-e33d236f8131-serving-cert\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640124 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-etcd-client\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640153 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a917d9bc-242b-4537-b454-edab3a6da7d6-images\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640177 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-ca\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640202 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ae59079-e71d-4cb5-960d-6bafe6f27d81-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640205 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640227 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkn5\" 
(UniqueName: \"kubernetes.io/projected/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-kube-api-access-bhkn5\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.636212 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640275 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640304 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640541 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2f6\" (UniqueName: \"kubernetes.io/projected/8502f12d-fa3b-441f-b96d-e33d236f8131-kube-api-access-vg2f6\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640569 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07339f94-9b18-4cdb-9e19-5068a64c5bd7-serving-cert\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640592 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640618 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-oauth-serving-cert\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640639 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc 
kubenswrapper[4889]: I1128 06:50:13.640658 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-serving-cert\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.640680 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-config\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.641779 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-client-ca\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.621324 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a917d9bc-242b-4537-b454-edab3a6da7d6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.643156 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-service-ca\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.646574 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.647400 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.647487 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.647944 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a917d9bc-242b-4537-b454-edab3a6da7d6-images\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.648737 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.651309 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.656985 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blr28\" (UniqueName: \"kubernetes.io/projected/262b0ea9-ac8b-4698-bea0-283f94e34240-kube-api-access-blr28\") pod \"cluster-samples-operator-665b6dd947-5fx7n\" (UID: \"262b0ea9-ac8b-4698-bea0-283f94e34240\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.657132 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00834fa5-849b-48d5-984e-7526dc4f71b4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.657175 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vp8q\" (UniqueName: \"kubernetes.io/projected/40f4d399-8f92-4d2f-afa4-8f460aff4348-kube-api-access-5vp8q\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.657250 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-etcd-client\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.657287 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.657802 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.657948 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-config\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: 
I1128 06:50:13.658031 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b21e995-d113-4b15-b59e-1ba217a862bc-audit-dir\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.658069 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-service-ca-bundle\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.658115 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-audit-policies\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.658138 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.658137 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.658187 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2tcxh"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.658236 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.658359 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.659352 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.659499 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/262b0ea9-ac8b-4698-bea0-283f94e34240-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5fx7n\" (UID: \"262b0ea9-ac8b-4698-bea0-283f94e34240\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.659564 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.660007 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-config\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.660383 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.661864 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.660584 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.662438 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6zxb8"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.664658 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.665387 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.665939 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.666861 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nkwz\" (UniqueName: \"kubernetes.io/projected/30b525af-4632-4fe9-bdd7-6ca436cedeb7-kube-api-access-9nkwz\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.667106 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.667245 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.671415 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.675832 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.679105 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.680831 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bw9t7"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.682391 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.683130 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.687527 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmplj"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.687612 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bj5j7"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.688506 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gxwdj"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.688728 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bj5j7" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.690072 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.690407 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vbw2g"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.691369 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.691904 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.692799 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ptzsm"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.693986 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.695413 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.697575 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lcslc"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.698006 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9h4ng"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.698261 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.699610 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.700559 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mgdw9"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.701857 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.703389 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjpk7"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.704361 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.704618 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.705949 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.707570 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.708217 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.709350 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.711195 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.712116 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-cc6md"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.713162 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-v55wm"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.714040 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.714336 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-skddf"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.715966 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.716987 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.718055 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.719559 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bj5j7"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.720151 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2tcxh"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.721225 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.722314 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-v55wm"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.723305 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.724349 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.724621 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.725526 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gxwdj"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.726696 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6zxb8"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.727576 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.728558 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-v8mjg"] Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.729201 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.744946 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.765183 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.768793 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae59079-e71d-4cb5-960d-6bafe6f27d81-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.768848 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.768886 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-etcd-serving-ca\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.768914 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-serving-cert\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769036 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-service-ca\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769087 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xppfs\" (UniqueName: \"kubernetes.io/projected/0ae59079-e71d-4cb5-960d-6bafe6f27d81-kube-api-access-xppfs\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769114 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5qvk\" (UniqueName: \"kubernetes.io/projected/00834fa5-849b-48d5-984e-7526dc4f71b4-kube-api-access-d5qvk\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769137 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/07339f94-9b18-4cdb-9e19-5068a64c5bd7-trusted-ca\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769238 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-policies\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769270 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-config\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769295 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30b525af-4632-4fe9-bdd7-6ca436cedeb7-audit-dir\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769321 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgh5\" (UniqueName: \"kubernetes.io/projected/5c2857b5-4c19-4889-915a-1477fc6ce9c6-kube-api-access-mvgh5\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769351 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zdvm\" (UniqueName: \"kubernetes.io/projected/0921de10-3f1e-4264-b771-90c1b1e1ddbc-kube-api-access-8zdvm\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769388 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-audit\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769414 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-trusted-ca-bundle\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769463 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-oauth-config\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc 
kubenswrapper[4889]: I1128 06:50:13.769501 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0921de10-3f1e-4264-b771-90c1b1e1ddbc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769527 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769552 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2857b5-4c19-4889-915a-1477fc6ce9c6-serving-cert\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769578 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7z7k\" (UniqueName: \"kubernetes.io/projected/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-kube-api-access-m7z7k\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769602 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwkg\" (UniqueName: \"kubernetes.io/projected/7b21e995-d113-4b15-b59e-1ba217a862bc-kube-api-access-xnwkg\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769627 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tpqn\" (UniqueName: \"kubernetes.io/projected/07339f94-9b18-4cdb-9e19-5068a64c5bd7-kube-api-access-8tpqn\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769650 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-dir\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769673 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769696 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-etcd-client\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769750 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8502f12d-fa3b-441f-b96d-e33d236f8131-serving-cert\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770056 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-ca\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770089 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ae59079-e71d-4cb5-960d-6bafe6f27d81-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770116 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhkn5\" (UniqueName: \"kubernetes.io/projected/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-kube-api-access-bhkn5\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770146 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770172 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2f6\" (UniqueName: \"kubernetes.io/projected/8502f12d-fa3b-441f-b96d-e33d236f8131-kube-api-access-vg2f6\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770196 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07339f94-9b18-4cdb-9e19-5068a64c5bd7-serving-cert\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770201 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-audit\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770227 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-oauth-serving-cert\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770280 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770314 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-serving-cert\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770334 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770360 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770382 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-config\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770405 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-service-ca\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770436 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00834fa5-849b-48d5-984e-7526dc4f71b4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" 
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770457 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vp8q\" (UniqueName: \"kubernetes.io/projected/40f4d399-8f92-4d2f-afa4-8f460aff4348-kube-api-access-5vp8q\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770478 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-etcd-client\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770497 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770524 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b21e995-d113-4b15-b59e-1ba217a862bc-audit-dir\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770547 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-service-ca-bundle\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770565 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770587 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-audit-policies\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770605 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770625 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770650 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nkwz\" (UniqueName: \"kubernetes.io/projected/30b525af-4632-4fe9-bdd7-6ca436cedeb7-kube-api-access-9nkwz\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770672 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770691 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770737 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hml6b\" (UniqueName: \"kubernetes.io/projected/11e5ff8e-7175-4c44-a641-e01582ee0e38-kube-api-access-hml6b\") pod \"dns-operator-744455d44c-gmplj\" (UID: \"11e5ff8e-7175-4c44-a641-e01582ee0e38\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmplj"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770755 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7491558-b178-467b-9a43-f41ef8f00f9e-serving-cert\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770776 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770793 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5kb8\" (UniqueName: \"kubernetes.io/projected/f7491558-b178-467b-9a43-f41ef8f00f9e-kube-api-access-m5kb8\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.770894 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-dir\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769697 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae59079-e71d-4cb5-960d-6bafe6f27d81-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.769978 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-etcd-serving-ca\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.771231 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.771250 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30b525af-4632-4fe9-bdd7-6ca436cedeb7-audit-dir\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.771384 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-trusted-ca-bundle\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.772248 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.772382 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-oauth-serving-cert\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.772835 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.772836 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.772877 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b21e995-d113-4b15-b59e-1ba217a862bc-audit-dir\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.772916 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-service-ca-bundle\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.773011 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-config\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.773530 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-ca\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.774043 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.774276 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.774650 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b21e995-d113-4b15-b59e-1ba217a862bc-audit-policies\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.774869 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-serving-cert\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.775390 4889 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-policies\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.775453 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-service-ca\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.775756 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-config\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.776211 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-service-ca\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.776409 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07339f94-9b18-4cdb-9e19-5068a64c5bd7-trusted-ca\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.776726 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00834fa5-849b-48d5-984e-7526dc4f71b4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.776992 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.777359 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2857b5-4c19-4889-915a-1477fc6ce9c6-serving-cert\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.777399 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-etcd-client\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc 
kubenswrapper[4889]: I1128 06:50:13.777511 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.777557 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ae59079-e71d-4cb5-960d-6bafe6f27d81-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.777572 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-config\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.777608 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.777854 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.777995 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0921de10-3f1e-4264-b771-90c1b1e1ddbc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.778007 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.778065 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl7v2\" (UniqueName: \"kubernetes.io/projected/8789adc8-7db9-46c9-994b-b5be723cc076-kube-api-access-sl7v2\") pod \"downloads-7954f5f757-bw9t7\" (UID: \"8789adc8-7db9-46c9-994b-b5be723cc076\") " pod="openshift-console/downloads-7954f5f757-bw9t7" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.778355 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-client\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.778433 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-config\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.778507 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-config\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.778544 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07339f94-9b18-4cdb-9e19-5068a64c5bd7-serving-cert\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.778950 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0921de10-3f1e-4264-b771-90c1b1e1ddbc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779076 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0921de10-3f1e-4264-b771-90c1b1e1ddbc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779142 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779232 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8502f12d-fa3b-441f-b96d-e33d236f8131-serving-cert\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779186 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779349 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-encryption-config\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779456 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-config\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779533 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-image-import-ca\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779749 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktb7c\" (UniqueName: \"kubernetes.io/projected/aca1ea5e-ae14-45a8-9a19-acaea4176a13-kube-api-access-ktb7c\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779795 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-config\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779897 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11e5ff8e-7175-4c44-a641-e01582ee0e38-metrics-tls\") pod \"dns-operator-744455d44c-gmplj\" (UID: \"11e5ff8e-7175-4c44-a641-e01582ee0e38\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.779960 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-etcd-client\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780039 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780087 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780138 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00834fa5-849b-48d5-984e-7526dc4f71b4-serving-cert\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780176 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07339f94-9b18-4cdb-9e19-5068a64c5bd7-config\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780223 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-client-ca\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780262 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30b525af-4632-4fe9-bdd7-6ca436cedeb7-node-pullsecrets\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780300 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-encryption-config\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780361 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-serving-cert\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780510 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2857b5-4c19-4889-915a-1477fc6ce9c6-config\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.780668 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 
06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.781118 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.781172 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-oauth-config\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.781510 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c2857b5-4c19-4889-915a-1477fc6ce9c6-etcd-client\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.781596 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30b525af-4632-4fe9-bdd7-6ca436cedeb7-node-pullsecrets\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.781950 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.782000 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7491558-b178-467b-9a43-f41ef8f00f9e-serving-cert\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.782511 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-client-ca\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.782690 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7b21e995-d113-4b15-b59e-1ba217a862bc-encryption-config\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.782947 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/30b525af-4632-4fe9-bdd7-6ca436cedeb7-image-import-ca\") pod \"apiserver-76f77b778f-ptzsm\" (UID: 
\"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.783164 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.783437 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07339f94-9b18-4cdb-9e19-5068a64c5bd7-config\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.784037 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.784462 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.785066 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.785582 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7491558-b178-467b-9a43-f41ef8f00f9e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.785695 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-serving-cert\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.785835 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-serving-cert\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.787250 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: 
\"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.787322 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00834fa5-849b-48d5-984e-7526dc4f71b4-serving-cert\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.788934 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30b525af-4632-4fe9-bdd7-6ca436cedeb7-encryption-config\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.805009 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.825334 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.844285 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.855009 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11e5ff8e-7175-4c44-a641-e01582ee0e38-metrics-tls\") pod \"dns-operator-744455d44c-gmplj\" (UID: \"11e5ff8e-7175-4c44-a641-e01582ee0e38\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.865029 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.904482 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.910270 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.925086 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.945480 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.954117 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.964559 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 06:50:13 crc kubenswrapper[4889]: I1128 06:50:13.985058 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.025612 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.045408 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.064767 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.084488 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.105154 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.126078 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.145497 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.164809 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.198423 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.205206 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.225455 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.246302 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.266883 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.285735 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.305673 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 
06:50:14.324996 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.346070 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.365309 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.385827 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.405406 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.424951 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.445415 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.464421 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.485206 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.504981 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.524389 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.544575 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.565215 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.585520 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.605684 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.623606 4889 request.go:700] Waited for 1.012928917s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.626496 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.645264 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 06:50:14 crc 
kubenswrapper[4889]: I1128 06:50:14.667627 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.685264 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.704930 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.724927 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.760316 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fz4l\" (UniqueName: \"kubernetes.io/projected/a917d9bc-242b-4537-b454-edab3a6da7d6-kube-api-access-4fz4l\") pod \"machine-api-operator-5694c8668f-hn9w9\" (UID: \"a917d9bc-242b-4537-b454-edab3a6da7d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.783189 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qvll\" (UniqueName: \"kubernetes.io/projected/32d7045a-59bd-4637-9365-be7ca63fab06-kube-api-access-8qvll\") pod \"controller-manager-879f6c89f-4kcw5\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.784986 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.805101 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.825338 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.844473 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.865216 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.904755 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.905109 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blr28\" (UniqueName: \"kubernetes.io/projected/262b0ea9-ac8b-4698-bea0-283f94e34240-kube-api-access-blr28\") pod \"cluster-samples-operator-665b6dd947-5fx7n\" (UID: \"262b0ea9-ac8b-4698-bea0-283f94e34240\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.925144 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.944596 4889 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.971791 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.984932 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 06:50:14 crc kubenswrapper[4889]: I1128 06:50:14.989177 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.001925 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.005329 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.012208 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.025416 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.046211 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.065826 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.088030 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.105428 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.125368 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.145294 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.166110 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.184998 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.205069 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.225106 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.245657 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 
06:50:15.264817 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.285376 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.304690 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.324782 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.345642 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.364606 4889 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.385483 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.405604 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.424854 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.445651 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.464540 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.484384 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.506064 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.525941 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.545376 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.601630 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tpqn\" (UniqueName: \"kubernetes.io/projected/07339f94-9b18-4cdb-9e19-5068a64c5bd7-kube-api-access-8tpqn\") pod \"console-operator-58897d9998-vbw2g\" (UID: \"07339f94-9b18-4cdb-9e19-5068a64c5bd7\") " pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.621345 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2f6\" (UniqueName: \"kubernetes.io/projected/8502f12d-fa3b-441f-b96d-e33d236f8131-kube-api-access-vg2f6\") pod \"route-controller-manager-6576b87f9c-sl2sc\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.642944 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgh5\" (UniqueName: \"kubernetes.io/projected/5c2857b5-4c19-4889-915a-1477fc6ce9c6-kube-api-access-mvgh5\") pod \"etcd-operator-b45778765-lcslc\" (UID: \"5c2857b5-4c19-4889-915a-1477fc6ce9c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.643039 4889 request.go:700] Waited for 1.871594575s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.659485 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdvm\" (UniqueName: \"kubernetes.io/projected/0921de10-3f1e-4264-b771-90c1b1e1ddbc-kube-api-access-8zdvm\") pod \"openshift-controller-manager-operator-756b6f6bc6-ljn4s\" (UID: \"0921de10-3f1e-4264-b771-90c1b1e1ddbc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.683072 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5kb8\" (UniqueName: \"kubernetes.io/projected/f7491558-b178-467b-9a43-f41ef8f00f9e-kube-api-access-m5kb8\") pod \"authentication-operator-69f744f599-p2nw8\" (UID: \"f7491558-b178-467b-9a43-f41ef8f00f9e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.702998 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nkwz\" (UniqueName: \"kubernetes.io/projected/30b525af-4632-4fe9-bdd7-6ca436cedeb7-kube-api-access-9nkwz\") pod \"apiserver-76f77b778f-ptzsm\" (UID: \"30b525af-4632-4fe9-bdd7-6ca436cedeb7\") " pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.718219 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhkn5\" (UniqueName: \"kubernetes.io/projected/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-kube-api-access-bhkn5\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.718383 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.718516 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.743853 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hml6b\" (UniqueName: \"kubernetes.io/projected/11e5ff8e-7175-4c44-a641-e01582ee0e38-kube-api-access-hml6b\") pod \"dns-operator-744455d44c-gmplj\" (UID: \"11e5ff8e-7175-4c44-a641-e01582ee0e38\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.762173 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5qvk\" (UniqueName: \"kubernetes.io/projected/00834fa5-849b-48d5-984e-7526dc4f71b4-kube-api-access-d5qvk\") pod \"openshift-config-operator-7777fb866f-6bc8m\" (UID: \"00834fa5-849b-48d5-984e-7526dc4f71b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.780220 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.782144 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84fe2977-46b7-4f86-91ed-6e03bd0a43f6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jhr6k\" (UID: \"84fe2977-46b7-4f86-91ed-6e03bd0a43f6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.799626 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xppfs\" (UniqueName: \"kubernetes.io/projected/0ae59079-e71d-4cb5-960d-6bafe6f27d81-kube-api-access-xppfs\") pod \"openshift-apiserver-operator-796bbdcf4f-fqbsw\" (UID: \"0ae59079-e71d-4cb5-960d-6bafe6f27d81\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.803525 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.805570 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.819608 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.824336 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7z7k\" (UniqueName: \"kubernetes.io/projected/372ce92b-75d0-4fc9-b6d0-07962d7a2dfc-kube-api-access-m7z7k\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwbzc\" (UID: \"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.826851 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.838370 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vp8q\" (UniqueName: \"kubernetes.io/projected/40f4d399-8f92-4d2f-afa4-8f460aff4348-kube-api-access-5vp8q\") pod \"oauth-openshift-558db77b4-mgdw9\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.840977 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.856251 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.862342 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl7v2\" (UniqueName: \"kubernetes.io/projected/8789adc8-7db9-46c9-994b-b5be723cc076-kube-api-access-sl7v2\") pod \"downloads-7954f5f757-bw9t7\" (UID: \"8789adc8-7db9-46c9-994b-b5be723cc076\") " pod="openshift-console/downloads-7954f5f757-bw9t7" Nov 28 06:50:15 crc kubenswrapper[4889]: I1128 06:50:15.883122 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktb7c\" (UniqueName: \"kubernetes.io/projected/aca1ea5e-ae14-45a8-9a19-acaea4176a13-kube-api-access-ktb7c\") pod \"console-f9d7485db-9h4ng\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.864022 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.866467 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.866990 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bw9t7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.867799 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwkg\" (UniqueName: \"kubernetes.io/projected/7b21e995-d113-4b15-b59e-1ba217a862bc-kube-api-access-xnwkg\") pod \"apiserver-7bbb656c7d-7qh5b\" (UID: \"7b21e995-d113-4b15-b59e-1ba217a862bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.868743 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.869867 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.870209 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.878571 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgs6w\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-kube-api-access-pgs6w\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.880879 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.881152 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.881373 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-tls\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.881611 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-certificates\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.881855 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqbqc\" (UniqueName: \"kubernetes.io/projected/28ce1e35-8647-4feb-8375-c33c20284687-kube-api-access-tqbqc\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.881976 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ce1e35-8647-4feb-8375-c33c20284687-config\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.882071 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-trusted-ca\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.882225 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-bound-sa-token\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.882324 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/28ce1e35-8647-4feb-8375-c33c20284687-machine-approver-tls\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.882445 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28ce1e35-8647-4feb-8375-c33c20284687-auth-proxy-config\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:16 crc kubenswrapper[4889]: E1128 06:50:16.882573 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.382543981 +0000 UTC m=+140.352778136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.882636 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.906179 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n"] Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.910144 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4kcw5"] Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.917668 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hn9w9"] Nov 28 06:50:16 crc kubenswrapper[4889]: W1128 06:50:16.945852 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32d7045a_59bd_4637_9365_be7ca63fab06.slice/crio-aba17c347eb645aadbc6b35b47af72362e2e257f680bac1ba49e9acf85af23c0 
WatchSource:0}: Error finding container aba17c347eb645aadbc6b35b47af72362e2e257f680bac1ba49e9acf85af23c0: Status 404 returned error can't find the container with id aba17c347eb645aadbc6b35b47af72362e2e257f680bac1ba49e9acf85af23c0 Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.984110 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:16 crc kubenswrapper[4889]: E1128 06:50:16.984271 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.484231879 +0000 UTC m=+140.454466034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.984341 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d508ac9a-385b-4485-b51f-58b92753b7e0-serving-cert\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.984391 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa47f84-001a-45ca-82ec-18b0e2917f5f-config\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.984412 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faf42878-840c-430d-b687-9a45b056b3b4-config-volume\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.984432 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kf79\" (UniqueName: \"kubernetes.io/projected/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-kube-api-access-6kf79\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.984460 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42afcdee-a73a-4865-adca-6a86a1dc81ee-srv-cert\") pod \"catalog-operator-68c6474976-h2c55\" (UID: 
\"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.986726 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e99c0dd-5d4f-4796-a04c-72b448a33f31-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.986761 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4c52cc3-42e8-419b-9f75-779c3279be2d-apiservice-cert\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.986787 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4608d26-293f-4f40-b3eb-7e44f9e490e8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d2nbw\" (UID: \"e4608d26-293f-4f40-b3eb-7e44f9e490e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987502 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-certificates\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987529 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqbqc\" (UniqueName: \"kubernetes.io/projected/28ce1e35-8647-4feb-8375-c33c20284687-kube-api-access-tqbqc\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987567 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dskxw\" (UniqueName: \"kubernetes.io/projected/26337077-30a6-4855-9c14-4b0bece1353e-kube-api-access-dskxw\") pod \"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987592 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-csi-data-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987620 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26337077-30a6-4855-9c14-4b0bece1353e-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987637 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41f10989-ef30-4194-9ff6-47f75389101c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987655 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-srv-cert\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987693 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987769 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ce1e35-8647-4feb-8375-c33c20284687-config\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987790 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvqqk\" (UniqueName: \"kubernetes.io/projected/60853e4e-b79e-4597-84fe-a051efbbeaff-kube-api-access-qvqqk\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987808 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-trusted-ca\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987832 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/faf42878-840c-430d-b687-9a45b056b3b4-metrics-tls\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987859 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-plugins-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 
28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987888 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa47f84-001a-45ca-82ec-18b0e2917f5f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987914 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/41f10989-ef30-4194-9ff6-47f75389101c-images\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987932 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2n2c\" (UniqueName: \"kubernetes.io/projected/483479b4-efee-46d9-b4b9-c126ea3280df-kube-api-access-d2n2c\") pod \"multus-admission-controller-857f4d67dd-cc6md\" (UID: \"483479b4-efee-46d9-b4b9-c126ea3280df\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987961 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60853e4e-b79e-4597-84fe-a051efbbeaff-config-volume\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.987981 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzf8w\" (UniqueName: \"kubernetes.io/projected/e4608d26-293f-4f40-b3eb-7e44f9e490e8-kube-api-access-nzf8w\") pod \"package-server-manager-789f6589d5-d2nbw\" (UID: \"e4608d26-293f-4f40-b3eb-7e44f9e490e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988001 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9505fe40-d6f4-40f5-b555-486eddeeefd5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r2h7q\" (UID: \"9505fe40-d6f4-40f5-b555-486eddeeefd5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988023 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-trusted-ca\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988039 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fda4e213-c731-46b3-8640-c41d9f61f81d-certs\") pod \"machine-config-server-v8mjg\" (UID: 
\"fda4e213-c731-46b3-8640-c41d9f61f81d\") " pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988112 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/28ce1e35-8647-4feb-8375-c33c20284687-machine-approver-tls\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988131 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d508ac9a-385b-4485-b51f-58b92753b7e0-config\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988148 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkljp\" (UniqueName: \"kubernetes.io/projected/41f10989-ef30-4194-9ff6-47f75389101c-kube-api-access-pkljp\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988163 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-socket-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988212 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c036ec-3275-4504-b287-6edf545c77fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988238 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988259 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-metrics-tls\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988276 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/41f10989-ef30-4194-9ff6-47f75389101c-proxy-tls\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988293 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4c52cc3-42e8-419b-9f75-779c3279be2d-webhook-cert\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988307 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745cr\" (UniqueName: \"kubernetes.io/projected/faf42878-840c-430d-b687-9a45b056b3b4-kube-api-access-745cr\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988324 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77c036ec-3275-4504-b287-6edf545c77fb-config\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988339 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988358 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/483479b4-efee-46d9-b4b9-c126ea3280df-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cc6md\" (UID: \"483479b4-efee-46d9-b4b9-c126ea3280df\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988388 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-metrics-certs\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988416 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkj8b\" (UniqueName: \"kubernetes.io/projected/d508ac9a-385b-4485-b51f-58b92753b7e0-kube-api-access-wkj8b\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988438 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-kube-api-access-cdhmx\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988454 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e99c0dd-5d4f-4796-a04c-72b448a33f31-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988480 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988563 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwhb4\" (UniqueName: \"kubernetes.io/projected/d215d9bb-b11b-434c-af59-42398990f8c6-kube-api-access-dwhb4\") pod \"migrator-59844c95c7-4t4td\" (UID: \"d215d9bb-b11b-434c-af59-42398990f8c6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988581 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgs6w\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-kube-api-access-pgs6w\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988596 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-stats-auth\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988624 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-registration-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988653 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988671 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fda4e213-c731-46b3-8640-c41d9f61f81d-node-bootstrap-token\") pod \"machine-config-server-v8mjg\" (UID: \"fda4e213-c731-46b3-8640-c41d9f61f81d\") " 
pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988799 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-mountpoint-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988823 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-signing-cabundle\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988843 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rbb\" (UniqueName: \"kubernetes.io/projected/0f2bc3c9-6717-4a58-a24a-275266f6b948-kube-api-access-l9rbb\") pod \"ingress-canary-bj5j7\" (UID: \"0f2bc3c9-6717-4a58-a24a-275266f6b948\") " pod="openshift-ingress-canary/ingress-canary-bj5j7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988885 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.988987 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-tls\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.989073 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-certificates\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.989352 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77c036ec-3275-4504-b287-6edf545c77fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.989571 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06eb8e8a-2974-4453-a266-988fe75852d6-service-ca-bundle\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.989642 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f2bc3c9-6717-4a58-a24a-275266f6b948-cert\") pod \"ingress-canary-bj5j7\" (UID: \"0f2bc3c9-6717-4a58-a24a-275266f6b948\") " pod="openshift-ingress-canary/ingress-canary-bj5j7" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.990282 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxw59\" (UniqueName: \"kubernetes.io/projected/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-kube-api-access-vxw59\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:16 crc kubenswrapper[4889]: I1128 06:50:16.990735 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrz6b\" (UniqueName: \"kubernetes.io/projected/fda4e213-c731-46b3-8640-c41d9f61f81d-kube-api-access-xrz6b\") pod \"machine-config-server-v8mjg\" (UID: \"fda4e213-c731-46b3-8640-c41d9f61f81d\") " pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:16 crc kubenswrapper[4889]: E1128 06:50:16.992926 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.49290701 +0000 UTC m=+140.463141165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:16 crc kubenswrapper[4889]: W1128 06:50:16.995878 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda917d9bc_242b_4537_b454_edab3a6da7d6.slice/crio-83bd1f063aa69c05a8b1aeb6c5e6b23652cd6786ae04dac1e2aadaedf798428a WatchSource:0}: Error finding container 83bd1f063aa69c05a8b1aeb6c5e6b23652cd6786ae04dac1e2aadaedf798428a: Status 404 returned error can't find the container with id 83bd1f063aa69c05a8b1aeb6c5e6b23652cd6786ae04dac1e2aadaedf798428a Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.000063 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/28ce1e35-8647-4feb-8375-c33c20284687-machine-approver-tls\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.000146 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtf2g\" (UniqueName: \"kubernetes.io/projected/fedcbacb-0096-4b5f-83da-23a7af142d37-kube-api-access-mtf2g\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.000256 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/7aa47f84-001a-45ca-82ec-18b0e2917f5f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.000580 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-bound-sa-token\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.000822 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60853e4e-b79e-4597-84fe-a051efbbeaff-secret-volume\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.000871 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c4c52cc3-42e8-419b-9f75-779c3279be2d-tmpfs\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.000924 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42afcdee-a73a-4865-adca-6a86a1dc81ee-profile-collector-cert\") pod \"catalog-operator-68c6474976-h2c55\" (UID: \"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.000965 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-signing-key\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001001 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpws5\" (UniqueName: \"kubernetes.io/projected/42afcdee-a73a-4865-adca-6a86a1dc81ee-kube-api-access-cpws5\") pod \"catalog-operator-68c6474976-h2c55\" (UID: \"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001047 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28ce1e35-8647-4feb-8375-c33c20284687-auth-proxy-config\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001241 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tp6\" (UniqueName: 
\"kubernetes.io/projected/9505fe40-d6f4-40f5-b555-486eddeeefd5-kube-api-access-v8tp6\") pod \"control-plane-machine-set-operator-78cbb6b69f-r2h7q\" (UID: \"9505fe40-d6f4-40f5-b555-486eddeeefd5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001290 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001315 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-default-certificate\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001360 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99dz\" (UniqueName: \"kubernetes.io/projected/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-kube-api-access-d99dz\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001431 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e99c0dd-5d4f-4796-a04c-72b448a33f31-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001523 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26337077-30a6-4855-9c14-4b0bece1353e-proxy-tls\") pod \"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.001565 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdszs\" (UniqueName: \"kubernetes.io/projected/c4c52cc3-42e8-419b-9f75-779c3279be2d-kube-api-access-qdszs\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.002244 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ce1e35-8647-4feb-8375-c33c20284687-config\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.003220 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/28ce1e35-8647-4feb-8375-c33c20284687-auth-proxy-config\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.003507 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.005688 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-trusted-ca\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.005790 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvg44\" (UniqueName: \"kubernetes.io/projected/06eb8e8a-2974-4453-a266-988fe75852d6-kube-api-access-vvg44\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.014000 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-tls\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.014791 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.019993 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqbqc\" (UniqueName: \"kubernetes.io/projected/28ce1e35-8647-4feb-8375-c33c20284687-kube-api-access-tqbqc\") pod \"machine-approver-56656f9798-9xmk5\" (UID: \"28ce1e35-8647-4feb-8375-c33c20284687\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.022859 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgs6w\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-kube-api-access-pgs6w\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.026481 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-bound-sa-token\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.113101 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.116224 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.117198 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.617151038 +0000 UTC m=+140.587385193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.116651 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c4c52cc3-42e8-419b-9f75-779c3279be2d-tmpfs\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118730 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42afcdee-a73a-4865-adca-6a86a1dc81ee-profile-collector-cert\") pod \"catalog-operator-68c6474976-h2c55\" (UID: \"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118752 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-signing-key\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118781 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpws5\" (UniqueName: \"kubernetes.io/projected/42afcdee-a73a-4865-adca-6a86a1dc81ee-kube-api-access-cpws5\") pod \"catalog-operator-68c6474976-h2c55\" (UID: \"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118812 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tp6\" (UniqueName: \"kubernetes.io/projected/9505fe40-d6f4-40f5-b555-486eddeeefd5-kube-api-access-v8tp6\") pod \"control-plane-machine-set-operator-78cbb6b69f-r2h7q\" (UID: \"9505fe40-d6f4-40f5-b555-486eddeeefd5\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118831 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-default-certificate\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118855 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99dz\" (UniqueName: \"kubernetes.io/projected/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-kube-api-access-d99dz\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118875 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e99c0dd-5d4f-4796-a04c-72b448a33f31-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118895 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26337077-30a6-4855-9c14-4b0bece1353e-proxy-tls\") pod \"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118917 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdszs\" (UniqueName: \"kubernetes.io/projected/c4c52cc3-42e8-419b-9f75-779c3279be2d-kube-api-access-qdszs\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118932 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvg44\" (UniqueName: \"kubernetes.io/projected/06eb8e8a-2974-4453-a266-988fe75852d6-kube-api-access-vvg44\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118955 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d508ac9a-385b-4485-b51f-58b92753b7e0-serving-cert\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.118972 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa47f84-001a-45ca-82ec-18b0e2917f5f-config\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:17 crc kubenswrapper[4889]: 
I1128 06:50:17.118991 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42afcdee-a73a-4865-adca-6a86a1dc81ee-srv-cert\") pod \"catalog-operator-68c6474976-h2c55\" (UID: \"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119009 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faf42878-840c-430d-b687-9a45b056b3b4-config-volume\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119027 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kf79\" (UniqueName: \"kubernetes.io/projected/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-kube-api-access-6kf79\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119048 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4c52cc3-42e8-419b-9f75-779c3279be2d-apiservice-cert\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119066 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e99c0dd-5d4f-4796-a04c-72b448a33f31-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119085 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4608d26-293f-4f40-b3eb-7e44f9e490e8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d2nbw\" (UID: \"e4608d26-293f-4f40-b3eb-7e44f9e490e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119107 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dskxw\" (UniqueName: \"kubernetes.io/projected/26337077-30a6-4855-9c14-4b0bece1353e-kube-api-access-dskxw\") pod \"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119125 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-csi-data-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119150 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/26337077-30a6-4855-9c14-4b0bece1353e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119167 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41f10989-ef30-4194-9ff6-47f75389101c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119187 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-srv-cert\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119206 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvqqk\" (UniqueName: \"kubernetes.io/projected/60853e4e-b79e-4597-84fe-a051efbbeaff-kube-api-access-qvqqk\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119223 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119249 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/faf42878-840c-430d-b687-9a45b056b3b4-metrics-tls\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119272 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-plugins-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119300 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa47f84-001a-45ca-82ec-18b0e2917f5f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119324 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/41f10989-ef30-4194-9ff6-47f75389101c-images\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119343 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2n2c\" (UniqueName: \"kubernetes.io/projected/483479b4-efee-46d9-b4b9-c126ea3280df-kube-api-access-d2n2c\") pod \"multus-admission-controller-857f4d67dd-cc6md\" (UID: \"483479b4-efee-46d9-b4b9-c126ea3280df\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119362 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60853e4e-b79e-4597-84fe-a051efbbeaff-config-volume\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119383 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzf8w\" (UniqueName: \"kubernetes.io/projected/e4608d26-293f-4f40-b3eb-7e44f9e490e8-kube-api-access-nzf8w\") pod \"package-server-manager-789f6589d5-d2nbw\" (UID: \"e4608d26-293f-4f40-b3eb-7e44f9e490e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119405 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9505fe40-d6f4-40f5-b555-486eddeeefd5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r2h7q\" (UID: \"9505fe40-d6f4-40f5-b555-486eddeeefd5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119423 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-trusted-ca\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119442 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fda4e213-c731-46b3-8640-c41d9f61f81d-certs\") pod \"machine-config-server-v8mjg\" (UID: \"fda4e213-c731-46b3-8640-c41d9f61f81d\") " pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119477 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d508ac9a-385b-4485-b51f-58b92753b7e0-config\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119494 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkljp\" (UniqueName: \"kubernetes.io/projected/41f10989-ef30-4194-9ff6-47f75389101c-kube-api-access-pkljp\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 
06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119512 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-socket-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119528 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c036ec-3275-4504-b287-6edf545c77fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119547 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119565 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/41f10989-ef30-4194-9ff6-47f75389101c-proxy-tls\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119579 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4c52cc3-42e8-419b-9f75-779c3279be2d-webhook-cert\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119593 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745cr\" (UniqueName: \"kubernetes.io/projected/faf42878-840c-430d-b687-9a45b056b3b4-kube-api-access-745cr\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119610 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-metrics-tls\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119627 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77c036ec-3275-4504-b287-6edf545c77fb-config\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119642 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119678 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-metrics-certs\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119696 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkj8b\" (UniqueName: \"kubernetes.io/projected/d508ac9a-385b-4485-b51f-58b92753b7e0-kube-api-access-wkj8b\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119729 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/483479b4-efee-46d9-b4b9-c126ea3280df-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cc6md\" (UID: \"483479b4-efee-46d9-b4b9-c126ea3280df\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119747 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-kube-api-access-cdhmx\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119763 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e99c0dd-5d4f-4796-a04c-72b448a33f31-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119784 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119801 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwhb4\" (UniqueName: \"kubernetes.io/projected/d215d9bb-b11b-434c-af59-42398990f8c6-kube-api-access-dwhb4\") pod \"migrator-59844c95c7-4t4td\" (UID: \"d215d9bb-b11b-434c-af59-42398990f8c6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119817 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-stats-auth\") pod \"router-default-5444994796-tsgkn\" (UID: 
\"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119834 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-registration-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119857 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119886 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fda4e213-c731-46b3-8640-c41d9f61f81d-node-bootstrap-token\") pod \"machine-config-server-v8mjg\" (UID: \"fda4e213-c731-46b3-8640-c41d9f61f81d\") " pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119903 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-signing-cabundle\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119918 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9rbb\" (UniqueName: \"kubernetes.io/projected/0f2bc3c9-6717-4a58-a24a-275266f6b948-kube-api-access-l9rbb\") pod \"ingress-canary-bj5j7\" (UID: \"0f2bc3c9-6717-4a58-a24a-275266f6b948\") " pod="openshift-ingress-canary/ingress-canary-bj5j7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119934 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-mountpoint-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119959 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77c036ec-3275-4504-b287-6edf545c77fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119975 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f2bc3c9-6717-4a58-a24a-275266f6b948-cert\") pod \"ingress-canary-bj5j7\" (UID: \"0f2bc3c9-6717-4a58-a24a-275266f6b948\") " pod="openshift-ingress-canary/ingress-canary-bj5j7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119991 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/06eb8e8a-2974-4453-a266-988fe75852d6-service-ca-bundle\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.120012 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxw59\" (UniqueName: \"kubernetes.io/projected/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-kube-api-access-vxw59\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.120034 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrz6b\" (UniqueName: \"kubernetes.io/projected/fda4e213-c731-46b3-8640-c41d9f61f81d-kube-api-access-xrz6b\") pod \"machine-config-server-v8mjg\" (UID: \"fda4e213-c731-46b3-8640-c41d9f61f81d\") " pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.120067 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtf2g\" (UniqueName: \"kubernetes.io/projected/fedcbacb-0096-4b5f-83da-23a7af142d37-kube-api-access-mtf2g\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.120083 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aa47f84-001a-45ca-82ec-18b0e2917f5f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.120114 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60853e4e-b79e-4597-84fe-a051efbbeaff-secret-volume\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.122154 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-socket-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.119121 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c4c52cc3-42e8-419b-9f75-779c3279be2d-tmpfs\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.125276 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa47f84-001a-45ca-82ec-18b0e2917f5f-config\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.126200 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41f10989-ef30-4194-9ff6-47f75389101c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.128319 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60853e4e-b79e-4597-84fe-a051efbbeaff-config-volume\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.129227 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-trusted-ca\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.130171 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06eb8e8a-2974-4453-a266-988fe75852d6-service-ca-bundle\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.134836 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.134904 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-mountpoint-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.135458 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-csi-data-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.136264 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d508ac9a-385b-4485-b51f-58b92753b7e0-config\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.137396 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/c4c52cc3-42e8-419b-9f75-779c3279be2d-webhook-cert\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.137996 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42afcdee-a73a-4865-adca-6a86a1dc81ee-profile-collector-cert\") pod \"catalog-operator-68c6474976-h2c55\" (UID: \"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.138293 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c036ec-3275-4504-b287-6edf545c77fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.144190 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faf42878-840c-430d-b687-9a45b056b3b4-config-volume\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.144643 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-plugins-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.144974 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.146764 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-stats-auth\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.146863 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fda4e213-c731-46b3-8640-c41d9f61f81d-certs\") pod \"machine-config-server-v8mjg\" (UID: \"fda4e213-c731-46b3-8640-c41d9f61f81d\") " pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.147291 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-metrics-certs\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.147368 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/06eb8e8a-2974-4453-a266-988fe75852d6-default-certificate\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.147667 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/41f10989-ef30-4194-9ff6-47f75389101c-proxy-tls\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.147980 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fedcbacb-0096-4b5f-83da-23a7af142d37-registration-dir\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.148290 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4c52cc3-42e8-419b-9f75-779c3279be2d-apiservice-cert\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.148292 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4608d26-293f-4f40-b3eb-7e44f9e490e8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d2nbw\" (UID: \"e4608d26-293f-4f40-b3eb-7e44f9e490e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.148369 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-signing-key\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.148522 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/483479b4-efee-46d9-b4b9-c126ea3280df-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cc6md\" (UID: \"483479b4-efee-46d9-b4b9-c126ea3280df\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.148928 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9505fe40-d6f4-40f5-b555-486eddeeefd5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r2h7q\" (UID: \"9505fe40-d6f4-40f5-b555-486eddeeefd5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.148954 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f2bc3c9-6717-4a58-a24a-275266f6b948-cert\") pod \"ingress-canary-bj5j7\" (UID: \"0f2bc3c9-6717-4a58-a24a-275266f6b948\") " 
pod="openshift-ingress-canary/ingress-canary-bj5j7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.149164 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/faf42878-840c-430d-b687-9a45b056b3b4-metrics-tls\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.149367 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa47f84-001a-45ca-82ec-18b0e2917f5f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.149577 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-metrics-tls\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.149619 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d508ac9a-385b-4485-b51f-58b92753b7e0-serving-cert\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.150115 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77c036ec-3275-4504-b287-6edf545c77fb-config\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.150141 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-srv-cert\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.153891 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e99c0dd-5d4f-4796-a04c-72b448a33f31-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.154391 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/41f10989-ef30-4194-9ff6-47f75389101c-images\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.155381 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.655326582 +0000 UTC m=+140.625560737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.155768 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-signing-cabundle\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.160019 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" event={"ID":"262b0ea9-ac8b-4698-bea0-283f94e34240","Type":"ContainerStarted","Data":"1ec5218546b9898ba250b8026ea149a3e07e99768c91995a035dd840cf3d5488"} Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.160326 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26337077-30a6-4855-9c14-4b0bece1353e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.162573 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42afcdee-a73a-4865-adca-6a86a1dc81ee-srv-cert\") pod \"catalog-operator-68c6474976-h2c55\" (UID: \"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.164992 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26337077-30a6-4855-9c14-4b0bece1353e-proxy-tls\") pod \"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.165436 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fda4e213-c731-46b3-8640-c41d9f61f81d-node-bootstrap-token\") pod \"machine-config-server-v8mjg\" (UID: \"fda4e213-c731-46b3-8640-c41d9f61f81d\") " pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.166005 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" event={"ID":"8502f12d-fa3b-441f-b96d-e33d236f8131","Type":"ContainerStarted","Data":"03488da303b76d2e1b5e980a6675596b1a10d0b2d4e61d2cafba412894d65809"} Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.166518 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1e99c0dd-5d4f-4796-a04c-72b448a33f31-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.173252 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60853e4e-b79e-4597-84fe-a051efbbeaff-secret-volume\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.173679 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.173916 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" event={"ID":"32d7045a-59bd-4637-9365-be7ca63fab06","Type":"ContainerStarted","Data":"aba17c347eb645aadbc6b35b47af72362e2e257f680bac1ba49e9acf85af23c0"} Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.174341 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvg44\" (UniqueName: \"kubernetes.io/projected/06eb8e8a-2974-4453-a266-988fe75852d6-kube-api-access-vvg44\") pod \"router-default-5444994796-tsgkn\" (UID: \"06eb8e8a-2974-4453-a266-988fe75852d6\") " pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.175263 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" event={"ID":"a917d9bc-242b-4537-b454-edab3a6da7d6","Type":"ContainerStarted","Data":"83bd1f063aa69c05a8b1aeb6c5e6b23652cd6786ae04dac1e2aadaedf798428a"} Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.177400 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2n2c\" (UniqueName: \"kubernetes.io/projected/483479b4-efee-46d9-b4b9-c126ea3280df-kube-api-access-d2n2c\") pod \"multus-admission-controller-857f4d67dd-cc6md\" (UID: \"483479b4-efee-46d9-b4b9-c126ea3280df\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.218559 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9rbb\" (UniqueName: \"kubernetes.io/projected/0f2bc3c9-6717-4a58-a24a-275266f6b948-kube-api-access-l9rbb\") pod \"ingress-canary-bj5j7\" (UID: \"0f2bc3c9-6717-4a58-a24a-275266f6b948\") " pod="openshift-ingress-canary/ingress-canary-bj5j7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.218756 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.222964 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dskxw\" (UniqueName: \"kubernetes.io/projected/26337077-30a6-4855-9c14-4b0bece1353e-kube-api-access-dskxw\") pod \"machine-config-controller-84d6567774-mkwg6\" (UID: \"26337077-30a6-4855-9c14-4b0bece1353e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.228542 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.229176 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.729128949 +0000 UTC m=+140.699363104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.234244 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bj5j7" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.245028 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtf2g\" (UniqueName: \"kubernetes.io/projected/fedcbacb-0096-4b5f-83da-23a7af142d37-kube-api-access-mtf2g\") pod \"csi-hostpathplugin-gxwdj\" (UID: \"fedcbacb-0096-4b5f-83da-23a7af142d37\") " pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.260495 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.266100 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77c036ec-3275-4504-b287-6edf545c77fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-brwx4\" (UID: \"77c036ec-3275-4504-b287-6edf545c77fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:17 crc kubenswrapper[4889]: W1128 06:50:17.273719 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28ce1e35_8647_4feb_8375_c33c20284687.slice/crio-ee203f6bf491c39fa275d2c211004f20274df66c493668bd4335734d3490aa3c WatchSource:0}: Error finding container ee203f6bf491c39fa275d2c211004f20274df66c493668bd4335734d3490aa3c: Status 404 returned error can't find the container with id ee203f6bf491c39fa275d2c211004f20274df66c493668bd4335734d3490aa3c Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.290170 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpws5\" (UniqueName: \"kubernetes.io/projected/42afcdee-a73a-4865-adca-6a86a1dc81ee-kube-api-access-cpws5\") pod \"catalog-operator-68c6474976-h2c55\" (UID: \"42afcdee-a73a-4865-adca-6a86a1dc81ee\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.308616 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzf8w\" (UniqueName: \"kubernetes.io/projected/e4608d26-293f-4f40-b3eb-7e44f9e490e8-kube-api-access-nzf8w\") pod \"package-server-manager-789f6589d5-d2nbw\" (UID: \"e4608d26-293f-4f40-b3eb-7e44f9e490e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.331102 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.331439 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.831417047 +0000 UTC m=+140.801651202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.336323 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aa47f84-001a-45ca-82ec-18b0e2917f5f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qvlll\" (UID: \"7aa47f84-001a-45ca-82ec-18b0e2917f5f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.351076 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrz6b\" (UniqueName: \"kubernetes.io/projected/fda4e213-c731-46b3-8640-c41d9f61f81d-kube-api-access-xrz6b\") pod \"machine-config-server-v8mjg\" (UID: \"fda4e213-c731-46b3-8640-c41d9f61f81d\") " pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.364668 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwhb4\" (UniqueName: \"kubernetes.io/projected/d215d9bb-b11b-434c-af59-42398990f8c6-kube-api-access-dwhb4\") pod \"migrator-59844c95c7-4t4td\" (UID: \"d215d9bb-b11b-434c-af59-42398990f8c6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.372460 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.402973 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.403786 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99dz\" (UniqueName: \"kubernetes.io/projected/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-kube-api-access-d99dz\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.409510 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p2nw8"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.424523 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tp6\" (UniqueName: \"kubernetes.io/projected/9505fe40-d6f4-40f5-b555-486eddeeefd5-kube-api-access-v8tp6\") pod \"control-plane-machine-set-operator-78cbb6b69f-r2h7q\" (UID: \"9505fe40-d6f4-40f5-b555-486eddeeefd5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.431065 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-tsgkn" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.431941 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.432272 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.932241198 +0000 UTC m=+140.902475353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.432589 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.433088 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:17.933080585 +0000 UTC m=+140.903314740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.444934 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.445581 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.457903 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.465131 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.465931 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-kube-api-access-cdhmx\") pod \"marketplace-operator-79b997595-2tcxh\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.467971 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea46aa5f-ef5d-4606-9e9c-48343a4bffcc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-skddf\" (UID: \"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.473401 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.486210 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkljp\" (UniqueName: \"kubernetes.io/projected/41f10989-ef30-4194-9ff6-47f75389101c-kube-api-access-pkljp\") pod \"machine-config-operator-74547568cd-6sst9\" (UID: \"41f10989-ef30-4194-9ff6-47f75389101c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.491783 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.501239 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkj8b\" (UniqueName: \"kubernetes.io/projected/d508ac9a-385b-4485-b51f-58b92753b7e0-kube-api-access-wkj8b\") pod \"service-ca-operator-777779d784-vfjdk\" (UID: \"d508ac9a-385b-4485-b51f-58b92753b7e0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.511830 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.520481 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kf79\" (UniqueName: \"kubernetes.io/projected/32ac5770-2575-4cc2-94f6-f6b5410c4b3d-kube-api-access-6kf79\") pod \"olm-operator-6b444d44fb-g6xxs\" (UID: \"32ac5770-2575-4cc2-94f6-f6b5410c4b3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.523454 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.534086 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.534488 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.034470534 +0000 UTC m=+141.004704689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.549112 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvqqk\" (UniqueName: \"kubernetes.io/projected/60853e4e-b79e-4597-84fe-a051efbbeaff-kube-api-access-qvqqk\") pod \"collect-profiles-29405205-58zz9\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.563592 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdszs\" (UniqueName: \"kubernetes.io/projected/c4c52cc3-42e8-419b-9f75-779c3279be2d-kube-api-access-qdszs\") pod \"packageserver-d55dfcdfc-t2jgg\" (UID: \"c4c52cc3-42e8-419b-9f75-779c3279be2d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.575736 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.584214 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v8mjg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.596970 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxw59\" (UniqueName: \"kubernetes.io/projected/e9ebbc7f-727d-4dd5-ae3f-46263af0da62-kube-api-access-vxw59\") pod \"service-ca-9c57cc56f-6zxb8\" (UID: \"e9ebbc7f-727d-4dd5-ae3f-46263af0da62\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.611515 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745cr\" (UniqueName: \"kubernetes.io/projected/faf42878-840c-430d-b687-9a45b056b3b4-kube-api-access-745cr\") pod \"dns-default-v55wm\" (UID: \"faf42878-840c-430d-b687-9a45b056b3b4\") " pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.623447 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e99c0dd-5d4f-4796-a04c-72b448a33f31-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x7pwf\" (UID: \"1e99c0dd-5d4f-4796-a04c-72b448a33f31\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.630626 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.638752 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.639948 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.640471 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.140454141 +0000 UTC m=+141.110688296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.663488 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lcslc"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.674222 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.695972 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.706833 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.712455 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.719432 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.722641 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmplj"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.738200 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ptzsm"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.741441 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.741785 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.241748107 +0000 UTC m=+141.211982262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.744845 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.745458 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.245441346 +0000 UTC m=+141.215675501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.749613 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.781615 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.800536 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.816169 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" Nov 28 06:50:17 crc kubenswrapper[4889]: W1128 06:50:17.821624 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod372ce92b_75d0_4fc9_b6d0_07962d7a2dfc.slice/crio-cdf370009273ca4268bad3a0750b016f94d09a5d076a33800f5139aa1df3e7f9 WatchSource:0}: Error finding container cdf370009273ca4268bad3a0750b016f94d09a5d076a33800f5139aa1df3e7f9: Status 404 returned error can't find the container with id cdf370009273ca4268bad3a0750b016f94d09a5d076a33800f5139aa1df3e7f9 Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.837825 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bj5j7"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.858306 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.858469 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.35843308 +0000 UTC m=+141.328667235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.859338 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.859761 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.359747143 +0000 UTC m=+141.329981298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.863075 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.878436 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.891466 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.959197 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9h4ng"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.960198 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:17 crc kubenswrapper[4889]: E1128 06:50:17.969249 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.469203392 +0000 UTC m=+141.439437547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.976921 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bw9t7"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.979192 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vbw2g"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.985458 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k"] Nov 28 06:50:17 crc kubenswrapper[4889]: I1128 06:50:17.992656 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mgdw9"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:17.998299 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.026755 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gxwdj"] Nov 28 06:50:18 crc kubenswrapper[4889]: W1128 06:50:18.045133 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40f4d399_8f92_4d2f_afa4_8f460aff4348.slice/crio-b4f5114e815a11048a3877daf3b33b6147019eccc587426c8f9a80cb95420d06 WatchSource:0}: Error finding container b4f5114e815a11048a3877daf3b33b6147019eccc587426c8f9a80cb95420d06: Status 404 returned error can't find the container with id b4f5114e815a11048a3877daf3b33b6147019eccc587426c8f9a80cb95420d06 Nov 28 06:50:18 crc kubenswrapper[4889]: W1128 06:50:18.045416 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26337077_30a6_4855_9c14_4b0bece1353e.slice/crio-1a965bd0fd1bb3c6714279a8946f412a42200d5895cea840423ddb4ac78552aa WatchSource:0}: Error finding container 1a965bd0fd1bb3c6714279a8946f412a42200d5895cea840423ddb4ac78552aa: Status 404 returned error can't find the container with id 1a965bd0fd1bb3c6714279a8946f412a42200d5895cea840423ddb4ac78552aa Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.071372 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.073521 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.573500475 +0000 UTC m=+141.543734620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.108022 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2tcxh"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.173444 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.174004 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.673981685 +0000 UTC m=+141.644215840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.213100 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" event={"ID":"fedcbacb-0096-4b5f-83da-23a7af142d37","Type":"ContainerStarted","Data":"c823c81db62266bafcdbd6e19bc8f0c37487903e3412391348bd2dbb9bed57d3"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.233038 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vbw2g" event={"ID":"07339f94-9b18-4cdb-9e19-5068a64c5bd7","Type":"ContainerStarted","Data":"f502690a7b9753e45d908dad58a23f5650b6c49cb969f02bd513c64a603ef363"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.240820 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" event={"ID":"42afcdee-a73a-4865-adca-6a86a1dc81ee","Type":"ContainerStarted","Data":"aea1233bec0b5881bc63f12541011091b7a90d0d186b59a962fccb51e9cddf05"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.244437 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" event={"ID":"00834fa5-849b-48d5-984e-7526dc4f71b4","Type":"ContainerStarted","Data":"706afbc56dca63ba11b4e4e4f05e1ac97fb81619f03632d4b0bb83331d054022"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.249845 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" 
event={"ID":"30b525af-4632-4fe9-bdd7-6ca436cedeb7","Type":"ContainerStarted","Data":"b43fcfcebcc06b786cf5df4a343f1dee340bce38438620f852c77358fedee110"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.252919 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bw9t7" event={"ID":"8789adc8-7db9-46c9-994b-b5be723cc076","Type":"ContainerStarted","Data":"d7fa166b83271f95d16971f58f6c9c5a9a3a00639c70b6754624e9a387697ed2"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.305112 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.307664 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.807639027 +0000 UTC m=+141.777873182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.310618 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cc6md"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.310728 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tsgkn" event={"ID":"06eb8e8a-2974-4453-a266-988fe75852d6","Type":"ContainerStarted","Data":"9ca09c998b0095089fb2a2e69ef5ecfe337f7a4c020170824b87c5a2b819fe4c"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.314335 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bj5j7" event={"ID":"0f2bc3c9-6717-4a58-a24a-275266f6b948","Type":"ContainerStarted","Data":"8b4e24d274677748c3a4bd8f683af5cec77c7e130ea78fe368bc00bd47fa6b52"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.362428 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" event={"ID":"8502f12d-fa3b-441f-b96d-e33d236f8131","Type":"ContainerStarted","Data":"3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.364040 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.364167 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v8mjg" event={"ID":"fda4e213-c731-46b3-8640-c41d9f61f81d","Type":"ContainerStarted","Data":"a47a5b1e12b9d478b38ed4460fd2f3f6ad6405d982f4114ca74ff58a513f8cf3"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.365969 4889 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" event={"ID":"32d7045a-59bd-4637-9365-be7ca63fab06","Type":"ContainerStarted","Data":"42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.367063 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.372650 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9h4ng" event={"ID":"aca1ea5e-ae14-45a8-9a19-acaea4176a13","Type":"ContainerStarted","Data":"7837bb48b111dd23debb58e6bebc9e639948e9caef15144ca265fd172dd1ca68"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.380490 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" event={"ID":"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc","Type":"ContainerStarted","Data":"cdf370009273ca4268bad3a0750b016f94d09a5d076a33800f5139aa1df3e7f9"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.382592 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" event={"ID":"5c2857b5-4c19-4889-915a-1477fc6ce9c6","Type":"ContainerStarted","Data":"19372daea5864c136b5649b0a6eb87eb99d1148e69c86b37ff14e292f80e7904"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.388609 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.394824 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" event={"ID":"262b0ea9-ac8b-4698-bea0-283f94e34240","Type":"ContainerStarted","Data":"b7480f1b48f0781d5e0d04c2fac2094a27a1b1d8b7ab103731df71633ec38fee"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.394890 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" event={"ID":"262b0ea9-ac8b-4698-bea0-283f94e34240","Type":"ContainerStarted","Data":"d6f81b99517a53e48e6000028ada5b60e84391573e52741bd9eda1a8e1aa220f"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.408733 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.409242 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:18.909223862 +0000 UTC m=+141.879458007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.410990 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" event={"ID":"28ce1e35-8647-4feb-8375-c33c20284687","Type":"ContainerStarted","Data":"7a0c405a3ef0711bd7d5a0476bee515ad086993ed7176afc77a69d0175c97bed"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.411075 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" event={"ID":"28ce1e35-8647-4feb-8375-c33c20284687","Type":"ContainerStarted","Data":"ee203f6bf491c39fa275d2c211004f20274df66c493668bd4335734d3490aa3c"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.422855 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" event={"ID":"0ae59079-e71d-4cb5-960d-6bafe6f27d81","Type":"ContainerStarted","Data":"6a4eacf9105a18730ec9e4760e0fb23ece008c6b638674c588951fc50fd02eb2"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.426894 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" event={"ID":"f7491558-b178-467b-9a43-f41ef8f00f9e","Type":"ContainerStarted","Data":"e713f905d35dbd4ac5d9eb94848b56f7c0d2b0b1ef6bd464c2ffdd6092a3b24b"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.426920 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" event={"ID":"f7491558-b178-467b-9a43-f41ef8f00f9e","Type":"ContainerStarted","Data":"2d0d981e7f7281b2c0a2dc6eaf41683b1a49e977f13a09f8154270a657d893cd"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.443288 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" event={"ID":"a917d9bc-242b-4537-b454-edab3a6da7d6","Type":"ContainerStarted","Data":"f9e3ba6f1f7d2bd3a5a44c8c26c6fb82fcfa3dede32996106cb92e9cf78f2836"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.443380 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" event={"ID":"a917d9bc-242b-4537-b454-edab3a6da7d6","Type":"ContainerStarted","Data":"a671032e280ec0ec94db6ca3cf520919a6d7eb9cf0c854a72bc184a6365230b3"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.449369 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" event={"ID":"7aa47f84-001a-45ca-82ec-18b0e2917f5f","Type":"ContainerStarted","Data":"e80d3f459e999b9b22fc70d235abcc85e4adbc0a72fe77f31b5ad8c39a967e9d"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.452999 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" 
event={"ID":"7b21e995-d113-4b15-b59e-1ba217a862bc","Type":"ContainerStarted","Data":"d2d9824cdaa2253fe57d433d1780aeaa0680146279e7755221ae770520e9b47a"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.457969 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.504637 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.510918 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.518435 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.018415533 +0000 UTC m=+141.988649888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.521373 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" event={"ID":"26337077-30a6-4855-9c14-4b0bece1353e","Type":"ContainerStarted","Data":"1a965bd0fd1bb3c6714279a8946f412a42200d5895cea840423ddb4ac78552aa"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.561497 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" event={"ID":"40f4d399-8f92-4d2f-afa4-8f460aff4348","Type":"ContainerStarted","Data":"b4f5114e815a11048a3877daf3b33b6147019eccc587426c8f9a80cb95420d06"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.612771 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.614599 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.114564723 +0000 UTC m=+142.084799018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.637891 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" event={"ID":"11e5ff8e-7175-4c44-a641-e01582ee0e38","Type":"ContainerStarted","Data":"ceadb98e3c5a9591ae91ea250dce345533b77243e6db49d0a0086d754e42fa37"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.649138 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.653436 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" event={"ID":"0921de10-3f1e-4264-b771-90c1b1e1ddbc","Type":"ContainerStarted","Data":"32194cd7a0d3883b54b41ef8def1d17dec45ec456178ba33813b9691029f8b37"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.658726 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" event={"ID":"84fe2977-46b7-4f86-91ed-6e03bd0a43f6","Type":"ContainerStarted","Data":"26e6b4e978626d4d29b4813dde08937ed2090f5cd62424a4b7144cf5de57d497"} Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.659783 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.714882 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.719105 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.219090353 +0000 UTC m=+142.189324508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.719400 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.823283 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.823538 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.32351938 +0000 UTC m=+142.293753535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.823646 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.824068 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.324061298 +0000 UTC m=+142.294295453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: W1128 06:50:18.892109 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd215d9bb_b11b_434c_af59_42398990f8c6.slice/crio-534256bcef2bc4edd98c02b9c6f947682e37758251610df7a116b8b89ec9bbc7 WatchSource:0}: Error finding container 534256bcef2bc4edd98c02b9c6f947682e37758251610df7a116b8b89ec9bbc7: Status 404 returned error can't find the container with id 534256bcef2bc4edd98c02b9c6f947682e37758251610df7a116b8b89ec9bbc7 Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.925106 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:18 crc kubenswrapper[4889]: E1128 06:50:18.926229 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.426200311 +0000 UTC m=+142.396434466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.984326 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" podStartSLOduration=121.984308681 podStartE2EDuration="2m1.984308681s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:18.951099147 +0000 UTC m=+141.921333322" watchObservedRunningTime="2025-11-28 06:50:18.984308681 +0000 UTC m=+141.954542836" Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.986633 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9"] Nov 28 06:50:18 crc kubenswrapper[4889]: I1128 06:50:18.987659 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" podStartSLOduration=121.987651019 podStartE2EDuration="2m1.987651019s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:18.980919571 +0000 UTC m=+141.951153726" watchObservedRunningTime="2025-11-28 06:50:18.987651019 +0000 UTC m=+141.957885164" Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.020404 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hn9w9" podStartSLOduration=122.020370567 podStartE2EDuration="2m2.020370567s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:19.008268846 +0000 UTC m=+141.978503001" watchObservedRunningTime="2025-11-28 06:50:19.020370567 +0000 UTC m=+141.990604722" Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.029899 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.030988 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.53097067 +0000 UTC m=+142.501204825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.043126 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-p2nw8" podStartSLOduration=122.043103303 podStartE2EDuration="2m2.043103303s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:19.041212102 +0000 UTC m=+142.011446257" watchObservedRunningTime="2025-11-28 06:50:19.043103303 +0000 UTC m=+142.013337458" Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.076025 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fx7n" podStartSLOduration=122.075996296 podStartE2EDuration="2m2.075996296s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:19.073727843 +0000 UTC m=+142.043961998" watchObservedRunningTime="2025-11-28 06:50:19.075996296 +0000 UTC m=+142.046230451" Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.135536 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.135983 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.635961236 +0000 UTC m=+142.606195391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.236862 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.237526 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.737511889 +0000 UTC m=+142.707746044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.342106 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.342632 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.842613268 +0000 UTC m=+142.812847423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.443624 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.444145 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:19.944128841 +0000 UTC m=+142.914362996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.544676 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.545805 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.045780068 +0000 UTC m=+143.016014223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.658753 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.659107 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.159093063 +0000 UTC m=+143.129327218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.660407 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-skddf"]
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.669524 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" event={"ID":"7b21e995-d113-4b15-b59e-1ba217a862bc","Type":"ContainerStarted","Data":"80f3b59fb49b465ec8fcda8817272495ee9827d2c578900ab298c854de95dd76"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.674667 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" event={"ID":"1e99c0dd-5d4f-4796-a04c-72b448a33f31","Type":"ContainerStarted","Data":"02a708d4c16eba8f96bddb3a504f69135558c38b60ddc5ffb199eb4284d49f89"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.685586 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw"]
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.687085 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" event={"ID":"56d26fb0-3c51-4131-ab05-3e0e407bd9dd","Type":"ContainerStarted","Data":"4acecc7884ca452a7b93e8d32f72bb01ab7e13c43b74c048ce5bf3cc94f98d20"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.687453 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9"]
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.689138 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" event={"ID":"372ce92b-75d0-4fc9-b6d0-07962d7a2dfc","Type":"ContainerStarted","Data":"db9032b482e321f901ad5f1db7c5c3e66437ef4a31e6514826e00b14dbef78c5"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.699216 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" event={"ID":"d508ac9a-385b-4485-b51f-58b92753b7e0","Type":"ContainerStarted","Data":"8eb93e229dbdc3b9182857094dd492eccee1c45fd90f8431cca11ca8bf62ca55"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.700778 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bw9t7" event={"ID":"8789adc8-7db9-46c9-994b-b5be723cc076","Type":"ContainerStarted","Data":"a50719f4d1c407c4c76492e7979fcdc17af2e9c689a22bdfd70404ad62b4dd9c"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.701458 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bw9t7"
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.703057 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-v55wm"]
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.705553 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" event={"ID":"483479b4-efee-46d9-b4b9-c126ea3280df","Type":"ContainerStarted","Data":"19d79945ac9f10b5f9d0ba160557b515273c03c3d970ca6eaf8ffbf4219b0bc7"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.710065 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" event={"ID":"77c036ec-3275-4504-b287-6edf545c77fb","Type":"ContainerStarted","Data":"1b69b3378517d1b4700e680921cc74dd687078285c7ed88713b2823181c8fe79"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.711434 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" event={"ID":"00834fa5-849b-48d5-984e-7526dc4f71b4","Type":"ContainerStarted","Data":"3767b58ffcf36aef05ca3f1f7590ac8f52bdb43ced09a6366dfdd7c427e42070"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.712617 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-bw9t7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.721198 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bw9t7" podUID="8789adc8-7db9-46c9-994b-b5be723cc076" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.720331 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwbzc" podStartSLOduration=122.720307303 podStartE2EDuration="2m2.720307303s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:19.715477157 +0000 UTC m=+142.685711312" watchObservedRunningTime="2025-11-28 06:50:19.720307303 +0000 UTC m=+142.690541458"
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.716751 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tsgkn" event={"ID":"06eb8e8a-2974-4453-a266-988fe75852d6","Type":"ContainerStarted","Data":"e38f41310b5094f21317bfd4522ae74483c193b695f8b63d64bfa30dd32b520c"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.731735 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" event={"ID":"60853e4e-b79e-4597-84fe-a051efbbeaff","Type":"ContainerStarted","Data":"e9d95f99ac3b70d08ca1c55090d165d78776b498697e7fc02f662b91f7b2bfee"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.751517 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bw9t7" podStartSLOduration=122.751496352 podStartE2EDuration="2m2.751496352s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:19.748543066 +0000 UTC m=+142.718777221" watchObservedRunningTime="2025-11-28 06:50:19.751496352 +0000 UTC m=+142.721730507"
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.764582 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.764956 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.264911985 +0000 UTC m=+143.235146140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.765310 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.765749 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.265740982 +0000 UTC m=+143.235975137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.771298 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" event={"ID":"d215d9bb-b11b-434c-af59-42398990f8c6","Type":"ContainerStarted","Data":"534256bcef2bc4edd98c02b9c6f947682e37758251610df7a116b8b89ec9bbc7"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.784398 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs"]
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.788079 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" event={"ID":"0921de10-3f1e-4264-b771-90c1b1e1ddbc","Type":"ContainerStarted","Data":"fb2c4159c60a9c4450ab338c962febc6e427da7fa281048e53621633e778f4c4"}
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.816292 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-tsgkn" podStartSLOduration=122.816264836 podStartE2EDuration="2m2.816264836s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:19.81297241 +0000 UTC m=+142.783206565" watchObservedRunningTime="2025-11-28 06:50:19.816264836 +0000 UTC m=+142.786498981"
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.838362 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6zxb8"]
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.840852 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ljn4s" podStartSLOduration=122.840829151 podStartE2EDuration="2m2.840829151s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:19.839345253 +0000 UTC m=+142.809579408" watchObservedRunningTime="2025-11-28 06:50:19.840829151 +0000 UTC m=+142.811063306"
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.867193 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.867679 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.367641458 +0000 UTC m=+143.337875613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.890028 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q"]
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.898540 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg"]
Nov 28 06:50:19 crc kubenswrapper[4889]: W1128 06:50:19.899290 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ac5770_2575_4cc2_94f6_f6b5410c4b3d.slice/crio-f63684445317b69d0a40e7f17e80f25c916d9b814ee14ee686aed1455add4de4 WatchSource:0}: Error finding container f63684445317b69d0a40e7f17e80f25c916d9b814ee14ee686aed1455add4de4: Status 404 returned error can't find the container with id f63684445317b69d0a40e7f17e80f25c916d9b814ee14ee686aed1455add4de4
Nov 28 06:50:19 crc kubenswrapper[4889]: I1128 06:50:19.975527 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:19 crc kubenswrapper[4889]: E1128 06:50:19.978056 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.478035778 +0000 UTC m=+143.448269933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: W1128 06:50:20.005801 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4c52cc3_42e8_419b_9f75_779c3279be2d.slice/crio-88d9ee6ca8a6b40211f92f7182007a9e236738131eb453b3d39316e4f28e63d6 WatchSource:0}: Error finding container 88d9ee6ca8a6b40211f92f7182007a9e236738131eb453b3d39316e4f28e63d6: Status 404 returned error can't find the container with id 88d9ee6ca8a6b40211f92f7182007a9e236738131eb453b3d39316e4f28e63d6
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.077367 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.078137 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.578100494 +0000 UTC m=+143.548334649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: W1128 06:50:20.116596 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9505fe40_d6f4_40f5_b555_486eddeeefd5.slice/crio-0c2da489528362e3a3e2cd106b731134245482fac583443a20d9bb56e5f17a64 WatchSource:0}: Error finding container 0c2da489528362e3a3e2cd106b731134245482fac583443a20d9bb56e5f17a64: Status 404 returned error can't find the container with id 0c2da489528362e3a3e2cd106b731134245482fac583443a20d9bb56e5f17a64
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.179961 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.180413 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.680398621 +0000 UTC m=+143.650632776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.282316 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.282781 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.782761942 +0000 UTC m=+143.752996097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.384986 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.385982 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.885967809 +0000 UTC m=+143.856201964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.432795 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tsgkn"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.442888 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:20 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:20 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:20 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.442959 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.486361 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.486839 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:20.98679614 +0000 UTC m=+143.957030435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.588180 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.588809 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.088781408 +0000 UTC m=+144.059015653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.689435 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.689673 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.18963866 +0000 UTC m=+144.159872805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.690282 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.690665 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.190646222 +0000 UTC m=+144.160880387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.791316 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.791796 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.291773013 +0000 UTC m=+144.262007168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.798810 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" event={"ID":"40f4d399-8f92-4d2f-afa4-8f460aff4348","Type":"ContainerStarted","Data":"88691fda2084d5406bf1eb28f5f09c999911a8110f6141d5587cc52fd65c1dee"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.799140 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.801381 4889 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mgdw9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body=
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.801423 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" podUID="40f4d399-8f92-4d2f-afa4-8f460aff4348" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.806343 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" event={"ID":"11e5ff8e-7175-4c44-a641-e01582ee0e38","Type":"ContainerStarted","Data":"2f4d0594baca8d6c7badad2b85e4b469852d6656db045fe65e5402160127deaa"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.807691 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" event={"ID":"5c2857b5-4c19-4889-915a-1477fc6ce9c6","Type":"ContainerStarted","Data":"9a2dae4631da103d47b8585cb766dbe7996700dd171f410c856b4452f24c1d95"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.811417 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" event={"ID":"d508ac9a-385b-4485-b51f-58b92753b7e0","Type":"ContainerStarted","Data":"ce084c434739caff31dd4db61f9f3f43c12617ee4cfd44ac73e890cc0cbdddb7"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.816610 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" event={"ID":"42afcdee-a73a-4865-adca-6a86a1dc81ee","Type":"ContainerStarted","Data":"17aa103c48bf309f7898a8abc0e33b59159c3194ad18c17f738f691377ab56ff"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.816921 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.821212 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v8mjg" event={"ID":"fda4e213-c731-46b3-8640-c41d9f61f81d","Type":"ContainerStarted","Data":"2c2b95822bb099c09b74b22326ababf1856bac4204162333b1b429cfb1e14085"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.859235 4889 generic.go:334] "Generic (PLEG): container finished" podID="30b525af-4632-4fe9-bdd7-6ca436cedeb7" containerID="ef15b726dd403b283c62fc74c17eee7f6a38c1e05dfcc12ffcd6e6bf58c02613" exitCode=0
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.859580 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" event={"ID":"30b525af-4632-4fe9-bdd7-6ca436cedeb7","Type":"ContainerDied","Data":"ef15b726dd403b283c62fc74c17eee7f6a38c1e05dfcc12ffcd6e6bf58c02613"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.860517 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.866645 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" podStartSLOduration=123.866626624 podStartE2EDuration="2m3.866626624s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:20.865516738 +0000 UTC m=+143.835750893" watchObservedRunningTime="2025-11-28 06:50:20.866626624 +0000 UTC m=+143.836860779"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.891964 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" event={"ID":"84fe2977-46b7-4f86-91ed-6e03bd0a43f6","Type":"ContainerStarted","Data":"9be8e5ca6c9c96d99454d75879104da392f8d538bedd85b022e7f5bc74cdbc75"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.892530 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.892906 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.392891603 +0000 UTC m=+144.363125758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.909688 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lcslc" podStartSLOduration=123.909658445 podStartE2EDuration="2m3.909658445s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:20.905418968 +0000 UTC m=+143.875653123" watchObservedRunningTime="2025-11-28 06:50:20.909658445 +0000 UTC m=+143.879892600"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.935567 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" event={"ID":"c4c52cc3-42e8-419b-9f75-779c3279be2d","Type":"ContainerStarted","Data":"88d9ee6ca8a6b40211f92f7182007a9e236738131eb453b3d39316e4f28e63d6"}
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.963962 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-v8mjg" podStartSLOduration=7.963942751 podStartE2EDuration="7.963942751s" podCreationTimestamp="2025-11-28 06:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:20.961887735 +0000 UTC m=+143.932121900" watchObservedRunningTime="2025-11-28 06:50:20.963942751 +0000 UTC m=+143.934176916"
Nov 28 06:50:20 crc kubenswrapper[4889]: I1128 06:50:20.996470 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:20 crc kubenswrapper[4889]: E1128 06:50:20.997902 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.497880769 +0000 UTC m=+144.468114924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.003308 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h2c55" podStartSLOduration=124.003282033 podStartE2EDuration="2m4.003282033s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.001864127 +0000 UTC m=+143.972098282" watchObservedRunningTime="2025-11-28 06:50:21.003282033 +0000 UTC m=+143.973516188"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.047113 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" event={"ID":"28ce1e35-8647-4feb-8375-c33c20284687","Type":"ContainerStarted","Data":"7e9467f7bdf0d8d8b3e88fd713d4b3367d0fedf76c1a3a7dcb16412f88d1cd09"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.086098 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" event={"ID":"56d26fb0-3c51-4131-ab05-3e0e407bd9dd","Type":"ContainerStarted","Data":"1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.086496 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vfjdk" podStartSLOduration=124.086484164 podStartE2EDuration="2m4.086484164s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.046901754 +0000 UTC m=+144.017135909" watchObservedRunningTime="2025-11-28 06:50:21.086484164 +0000 UTC m=+144.056718389"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.101746 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.102096 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.602084859 +0000 UTC m=+144.572319014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.138745 4889 generic.go:334] "Generic (PLEG): container finished" podID="00834fa5-849b-48d5-984e-7526dc4f71b4" containerID="3767b58ffcf36aef05ca3f1f7590ac8f52bdb43ced09a6366dfdd7c427e42070" exitCode=0
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.138837 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" event={"ID":"00834fa5-849b-48d5-984e-7526dc4f71b4","Type":"ContainerDied","Data":"3767b58ffcf36aef05ca3f1f7590ac8f52bdb43ced09a6366dfdd7c427e42070"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.189235 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" event={"ID":"77c036ec-3275-4504-b287-6edf545c77fb","Type":"ContainerStarted","Data":"54ca40127b8aee5550f351b3f9559850dcd95e25af688388c93808b817dbd575"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.195576 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" podStartSLOduration=124.195554281 podStartE2EDuration="2m4.195554281s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.158167392 +0000 UTC m=+144.128401547" watchObservedRunningTime="2025-11-28 06:50:21.195554281 +0000 UTC m=+144.165788436"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.196357 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jhr6k" podStartSLOduration=124.196352417 podStartE2EDuration="2m4.196352417s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.195644024 +0000 UTC m=+144.165878179" watchObservedRunningTime="2025-11-28 06:50:21.196352417 +0000 UTC m=+144.166586572"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.202817 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.204241 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.704207781 +0000 UTC m=+144.674442066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.227314 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" event={"ID":"e4608d26-293f-4f40-b3eb-7e44f9e490e8","Type":"ContainerStarted","Data":"82f097aaccf72e674715a3b4cf01d69d392e732faf1ba5a15310cad2259d276c"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.234896 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vbw2g" event={"ID":"07339f94-9b18-4cdb-9e19-5068a64c5bd7","Type":"ContainerStarted","Data":"c11dc042124324e4180f77721877a870408fd2cbd5dce6890d1c45617b534892"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.236020 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vbw2g"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.262961 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xmk5" podStartSLOduration=125.26292662 podStartE2EDuration="2m5.26292662s" podCreationTimestamp="2025-11-28 06:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.236244117 +0000 UTC m=+144.206478272" watchObservedRunningTime="2025-11-28 06:50:21.26292662 +0000 UTC m=+144.233160775"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.286222 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v55wm" event={"ID":"faf42878-840c-430d-b687-9a45b056b3b4","Type":"ContainerStarted","Data":"310cccb7f9e9396221a7479237f5411b41f38eb88e197cb6880b59fbeeb03e18"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.304759 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.307105 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.807084158 +0000 UTC m=+144.777318313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.312397 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vbw2g" podStartSLOduration=124.312376689 podStartE2EDuration="2m4.312376689s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.265316927 +0000 UTC m=+144.235551082" watchObservedRunningTime="2025-11-28 06:50:21.312376689 +0000 UTC m=+144.282610844"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.345037 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" event={"ID":"0ae59079-e71d-4cb5-960d-6bafe6f27d81","Type":"ContainerStarted","Data":"d019c65ed1eb0e8dad0ce5dff3587cdc011975a015a162e13815f8916f1d889c"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.371397 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" event={"ID":"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc","Type":"ContainerStarted","Data":"998ff51f532298e42ba51fa39b7864cfce55847886ed734cdb5a6c5dac067938"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.399376 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" event={"ID":"32ac5770-2575-4cc2-94f6-f6b5410c4b3d","Type":"ContainerStarted","Data":"f63684445317b69d0a40e7f17e80f25c916d9b814ee14ee686aed1455add4de4"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.401105 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.405365 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.406759 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:21.906732181 +0000 UTC m=+144.876966496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.406822 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fqbsw" podStartSLOduration=124.406799153 podStartE2EDuration="2m4.406799153s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.390398883 +0000 UTC m=+144.360633048" watchObservedRunningTime="2025-11-28 06:50:21.406799153 +0000 UTC m=+144.377033308"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.407055 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brwx4" podStartSLOduration=124.407049611 podStartE2EDuration="2m4.407049611s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.355002648 +0000 UTC m=+144.325236803" watchObservedRunningTime="2025-11-28 06:50:21.407049611 +0000 UTC m=+144.377283766"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.411661 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" event={"ID":"e9ebbc7f-727d-4dd5-ae3f-46263af0da62","Type":"ContainerStarted","Data":"638109196e865b9670ebdb32613ae18cad98dd8cf77d7e93d830cd5e6017fe84"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.424304 4889 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g6xxs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.424378 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" podUID="32ac5770-2575-4cc2-94f6-f6b5410c4b3d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.427695 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" event={"ID":"7aa47f84-001a-45ca-82ec-18b0e2917f5f","Type":"ContainerStarted","Data":"3a2ca8f3538f734d322defe66173fcafcb5832ffd958a57715c917684f3aff05"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.431189 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" podStartSLOduration=124.431171551 podStartE2EDuration="2m4.431171551s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.429480097 +0000 UTC m=+144.399714242" watchObservedRunningTime="2025-11-28 06:50:21.431171551 +0000 UTC m=+144.401405706"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.449316 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:21 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:21 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:21 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.449367 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.472227 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" event={"ID":"41f10989-ef30-4194-9ff6-47f75389101c","Type":"ContainerStarted","Data":"af8b932145cdc1f0268bcba70a3a72d015aea14ed0821723ccf46c9251493fef"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.473149 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qvlll" podStartSLOduration=124.473138359 podStartE2EDuration="2m4.473138359s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.47257303 +0000 UTC m=+144.442807185" watchObservedRunningTime="2025-11-28 06:50:21.473138359 +0000 UTC m=+144.443372514"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.510182 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.511015 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9h4ng" event={"ID":"aca1ea5e-ae14-45a8-9a19-acaea4176a13","Type":"ContainerStarted","Data":"087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c"}
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.523213 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.023186857 +0000 UTC m=+144.993421182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.567649 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" podStartSLOduration=124.567612154 podStartE2EDuration="2m4.567612154s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.50873837 +0000 UTC m=+144.478972525" watchObservedRunningTime="2025-11-28 06:50:21.567612154 +0000 UTC m=+144.537846309"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.577761 4889 generic.go:334] "Generic (PLEG): container finished" podID="7b21e995-d113-4b15-b59e-1ba217a862bc" containerID="80f3b59fb49b465ec8fcda8817272495ee9827d2c578900ab298c854de95dd76" exitCode=0
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.578448 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" event={"ID":"7b21e995-d113-4b15-b59e-1ba217a862bc","Type":"ContainerDied","Data":"80f3b59fb49b465ec8fcda8817272495ee9827d2c578900ab298c854de95dd76"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.612580 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.613968 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.113943892 +0000 UTC m=+145.084178047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.614028 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.615786 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.115778181 +0000 UTC m=+145.086012326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.645760 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9h4ng" podStartSLOduration=124.645698829 podStartE2EDuration="2m4.645698829s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.566392514 +0000 UTC m=+144.536626669" watchObservedRunningTime="2025-11-28 06:50:21.645698829 +0000 UTC m=+144.615932984"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.663577 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" event={"ID":"483479b4-efee-46d9-b4b9-c126ea3280df","Type":"ContainerStarted","Data":"b136bbb439248441367827897b0128edbf1f6c7a591e09bcfc7bc42d71b2eef2"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.702814 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" event={"ID":"26337077-30a6-4855-9c14-4b0bece1353e","Type":"ContainerStarted","Data":"d32cdcb5c4cc2c80427fc863935168ba8515bc9461058cbdda7b1b8721c8a440"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.719912 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.721237 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.221218021 +0000 UTC m=+145.191452176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.722668 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" event={"ID":"9505fe40-d6f4-40f5-b555-486eddeeefd5","Type":"ContainerStarted","Data":"0c2da489528362e3a3e2cd106b731134245482fac583443a20d9bb56e5f17a64"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.745478 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" podStartSLOduration=124.745459525 podStartE2EDuration="2m4.745459525s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.743256544 +0000 UTC m=+144.713490699" watchObservedRunningTime="2025-11-28 06:50:21.745459525 +0000 UTC m=+144.715693680"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.788461 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-bw9t7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.789048 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bw9t7" podUID="8789adc8-7db9-46c9-994b-b5be723cc076" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.789535 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bj5j7" event={"ID":"0f2bc3c9-6717-4a58-a24a-275266f6b948","Type":"ContainerStarted","Data":"258bda650aa99b736da2a739a7bb8339d67c141176daa0cdbbb0273221658fe1"}
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.823645 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.825275 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.325255936 +0000 UTC m=+145.295490091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.888329 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" podStartSLOduration=124.888309805 podStartE2EDuration="2m4.888309805s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.837232023 +0000 UTC m=+144.807466178" watchObservedRunningTime="2025-11-28 06:50:21.888309805 +0000 UTC m=+144.858543970"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.889065 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bj5j7" podStartSLOduration=8.889060689 podStartE2EDuration="8.889060689s" podCreationTimestamp="2025-11-28 06:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:21.887169658 +0000 UTC m=+144.857403813" watchObservedRunningTime="2025-11-28 06:50:21.889060689 +0000 UTC m=+144.859294844"
Nov 28 06:50:21 crc kubenswrapper[4889]: I1128 06:50:21.925271 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:50:21 crc kubenswrapper[4889]: E1128 06:50:21.926662 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.426640945 +0000 UTC m=+145.396875100 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.029632 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.043162 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.543140532 +0000 UTC m=+145.513374687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.068581 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vhttl"] Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.070169 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.091246 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhttl"] Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.109276 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.141061 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.141412 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.641391429 +0000 UTC m=+145.611625584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.238875 4889 patch_prober.go:28] interesting pod/console-operator-58897d9998-vbw2g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.238974 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vbw2g" podUID="07339f94-9b18-4cdb-9e19-5068a64c5bd7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.243559 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-utilities\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.243605 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.243641 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-catalog-content\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.243689 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4kss\" (UniqueName: \"kubernetes.io/projected/e1c17912-a129-45b4-b833-04493886c507-kube-api-access-l4kss\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.244048 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.744035929 +0000 UTC m=+145.714270084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.258918 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxh5k"] Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.259947 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.266667 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.276752 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxh5k"] Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.344257 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.344473 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.844420175 +0000 UTC m=+145.814654330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.344574 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhw2l\" (UniqueName: \"kubernetes.io/projected/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-kube-api-access-fhw2l\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.344617 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-utilities\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.344642 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.344659 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-catalog-content\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.344683 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-catalog-content\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.345512 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-utilities\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.345829 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.84581243 +0000 UTC m=+145.816046585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.346171 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4kss\" (UniqueName: \"kubernetes.io/projected/e1c17912-a129-45b4-b833-04493886c507-kube-api-access-l4kss\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.346316 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-catalog-content\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.346366 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-utilities\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.388124 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4kss\" (UniqueName: \"kubernetes.io/projected/e1c17912-a129-45b4-b833-04493886c507-kube-api-access-l4kss\") pod \"certified-operators-vhttl\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") " pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.405953 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhttl" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.442599 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:50:22 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Nov 28 06:50:22 crc kubenswrapper[4889]: [+]process-running ok Nov 28 06:50:22 crc kubenswrapper[4889]: healthz check failed Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.442996 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.449397 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.449653 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-utilities\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.449739 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhw2l\" (UniqueName: \"kubernetes.io/projected/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-kube-api-access-fhw2l\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.449795 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-catalog-content\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.450330 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-catalog-content\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.450437 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:22.950403443 +0000 UTC m=+145.920637598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.450695 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-utilities\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.467803 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8gg9"] Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.469301 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.487591 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhw2l\" (UniqueName: \"kubernetes.io/projected/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-kube-api-access-fhw2l\") pod \"community-operators-rxh5k\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.495637 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8gg9"] Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.550887 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-utilities\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.550940 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-catalog-content\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.550959 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhf9x\" (UniqueName: \"kubernetes.io/projected/bbed2470-346a-4721-ab26-d098ee16a4af-kube-api-access-fhf9x\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.551034 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.551337 4889 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.051322186 +0000 UTC m=+146.021556541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.614939 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.659514 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.659648 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.159617179 +0000 UTC m=+146.129851334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.659980 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-utilities\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.660410 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-utilities\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.660463 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-catalog-content\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.660485 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhf9x\" (UniqueName: 
\"kubernetes.io/projected/bbed2470-346a-4721-ab26-d098ee16a4af-kube-api-access-fhf9x\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.660726 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-catalog-content\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.660837 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.661181 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.161159068 +0000 UTC m=+146.131393223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.662205 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xgjw2"] Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.663240 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.672078 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xgjw2"] Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.691903 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhf9x\" (UniqueName: \"kubernetes.io/projected/bbed2470-346a-4721-ab26-d098ee16a4af-kube-api-access-fhf9x\") pod \"certified-operators-t8gg9\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.774728 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.775094 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-utilities\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.775266 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxvlg\" (UniqueName: \"kubernetes.io/projected/21ab5f29-c8c6-4073-a891-e4ecf8b34189-kube-api-access-vxvlg\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.775602 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-catalog-content\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.775824 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.275790446 +0000 UTC m=+146.246024601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.807040 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.839907 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" event={"ID":"41f10989-ef30-4194-9ff6-47f75389101c","Type":"ContainerStarted","Data":"098cd2a44844be77ee3f1c9a8c04a3b464c9e4ce13bd8eb82370f9614985c865"} Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.840265 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" event={"ID":"41f10989-ef30-4194-9ff6-47f75389101c","Type":"ContainerStarted","Data":"e408466d77bb03becd3741e62d206fb8dd23c369e80ae932012cef321dc2f54e"} Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.880849 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.880886 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-catalog-content\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.880907 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-utilities\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.881235 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxvlg\" (UniqueName: \"kubernetes.io/projected/21ab5f29-c8c6-4073-a891-e4ecf8b34189-kube-api-access-vxvlg\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.881314 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.381293997 +0000 UTC m=+146.351528152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.881933 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-catalog-content\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.893749 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-utilities\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.898460 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" event={"ID":"60853e4e-b79e-4597-84fe-a051efbbeaff","Type":"ContainerStarted","Data":"45871325a83f347df1090dde68b84a17c23efee03ca683cdd746860926289fc1"} Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.929575 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxvlg\" (UniqueName: \"kubernetes.io/projected/21ab5f29-c8c6-4073-a891-e4ecf8b34189-kube-api-access-vxvlg\") pod \"community-operators-xgjw2\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.944498 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6zxb8" event={"ID":"e9ebbc7f-727d-4dd5-ae3f-46263af0da62","Type":"ContainerStarted","Data":"fde482baf31b253016a75d51aa853da8cbb61346faf437b4d520af1adab1978c"} Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.944911 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6sst9" podStartSLOduration=125.944887924 podStartE2EDuration="2m5.944887924s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:22.880368298 +0000 UTC m=+145.850602453" watchObservedRunningTime="2025-11-28 06:50:22.944887924 +0000 UTC m=+145.915122079" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.945748 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" podStartSLOduration=125.945740742 podStartE2EDuration="2m5.945740742s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:22.945522335 +0000 UTC m=+145.915756500" watchObservedRunningTime="2025-11-28 06:50:22.945740742 +0000 UTC m=+145.915974897" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 
06:50:22.966791 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" event={"ID":"d215d9bb-b11b-434c-af59-42398990f8c6","Type":"ContainerStarted","Data":"e7662c220ef09efdf3665943336534b2b3370ea2adc6885b37adea80cb90e300"} Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.966836 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" event={"ID":"d215d9bb-b11b-434c-af59-42398990f8c6","Type":"ContainerStarted","Data":"66d586792e454caf4d1223ae6c7708057a9a42620ad71e957f85587c4e817cfb"} Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.980788 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" event={"ID":"c4c52cc3-42e8-419b-9f75-779c3279be2d","Type":"ContainerStarted","Data":"abf452e13ebca5b49e245d89abed6c710f8573fca08c5b50cc86b111f71e61e1"} Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.981643 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.982022 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:22 crc kubenswrapper[4889]: E1128 06:50:22.982663 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.482645405 +0000 UTC m=+146.452879560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:22 crc kubenswrapper[4889]: I1128 06:50:22.984563 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" event={"ID":"1e99c0dd-5d4f-4796-a04c-72b448a33f31","Type":"ContainerStarted","Data":"ed722292847b39c099223ff67d4498d4e170ed9bcd742b73f87f4c138c10003b"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.034932 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" podStartSLOduration=126.034911235 podStartE2EDuration="2m6.034911235s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.033224601 +0000 UTC m=+146.003458756" watchObservedRunningTime="2025-11-28 06:50:23.034911235 +0000 UTC m=+146.005145390" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.035990 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.036667 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4t4td" podStartSLOduration=126.036660632 podStartE2EDuration="2m6.036660632s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.0016738 +0000 UTC m=+145.971907965" watchObservedRunningTime="2025-11-28 06:50:23.036660632 +0000 UTC m=+146.006894787" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.036919 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" event={"ID":"e4608d26-293f-4f40-b3eb-7e44f9e490e8","Type":"ContainerStarted","Data":"6eae28fc4599b70729d43091cdc12d4d7ec78584f605b0a59205a71abcc7223d"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.038827 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.038847 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" event={"ID":"e4608d26-293f-4f40-b3eb-7e44f9e490e8","Type":"ContainerStarted","Data":"eb8ef3244d6434faefa48f46f81cb0435b855335324507655d903dcd7bd91679"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.069555 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x7pwf" podStartSLOduration=126.069533335 podStartE2EDuration="2m6.069533335s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.067087846 +0000 UTC m=+146.037322001" watchObservedRunningTime="2025-11-28 06:50:23.069533335 +0000 UTC m=+146.039767490" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.084630 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:23 crc kubenswrapper[4889]: E1128 06:50:23.087157 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.587120514 +0000 UTC m=+146.557354669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.101722 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" event={"ID":"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc","Type":"ContainerStarted","Data":"90786c0061d9521ccbc1e3a91228aa00b57110026ef92c572f425722f9dc8382"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.101770 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" event={"ID":"ea46aa5f-ef5d-4606-9e9c-48343a4bffcc","Type":"ContainerStarted","Data":"db815190fcd75be852fc682375768ddd575c5dd89b2ec978513f7d21d76d80f0"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.119618 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" event={"ID":"32ac5770-2575-4cc2-94f6-f6b5410c4b3d","Type":"ContainerStarted","Data":"db208a972d4e6a1020d6d43fc54f20957aa82cbc01d9e29427ab916f6337df73"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.136402 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" event={"ID":"11e5ff8e-7175-4c44-a641-e01582ee0e38","Type":"ContainerStarted","Data":"593dd9e8a2346854da3c11ed124c2458fc2ba100ed719c9a572d71b8c5bc2bdd"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.140736 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g6xxs" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.155773 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skddf" podStartSLOduration=126.155747553 podStartE2EDuration="2m6.155747553s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.140979825 +0000 UTC m=+146.111213980" watchObservedRunningTime="2025-11-28 06:50:23.155747553 +0000 UTC m=+146.125981708" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.158940 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" event={"ID":"30b525af-4632-4fe9-bdd7-6ca436cedeb7","Type":"ContainerStarted","Data":"de2023ae30db446fabeecaa80821cf474ce7944a39289d367c9234a480f7e6c3"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.159138 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" event={"ID":"30b525af-4632-4fe9-bdd7-6ca436cedeb7","Type":"ContainerStarted","Data":"816cbfc4f35dd1888656f374282138f7a58e87e2be2ad0c30e932ddff1fb24b2"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.169252 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw" podStartSLOduration=126.169220669 podStartE2EDuration="2m6.169220669s" 
podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.100050682 +0000 UTC m=+146.070284847" watchObservedRunningTime="2025-11-28 06:50:23.169220669 +0000 UTC m=+146.139454824" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.173739 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r2h7q" event={"ID":"9505fe40-d6f4-40f5-b555-486eddeeefd5","Type":"ContainerStarted","Data":"413dd1e8e83c7395355d51a977714b1f35df6ec9e69bc76c3d9ece309e0f4802"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.174257 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhttl"] Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.176251 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gmplj" podStartSLOduration=126.176229795 podStartE2EDuration="2m6.176229795s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.175028587 +0000 UTC m=+146.145262742" watchObservedRunningTime="2025-11-28 06:50:23.176229795 +0000 UTC m=+146.146463950" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.190661 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:23 crc kubenswrapper[4889]: E1128 06:50:23.194251 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.694215887 +0000 UTC m=+146.664450182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.231806 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" event={"ID":"fedcbacb-0096-4b5f-83da-23a7af142d37","Type":"ContainerStarted","Data":"e668a766cb3ceaed44254078f8f9cd8375b75e1ca601b9fd341dca6cc639a194"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.263807 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" podStartSLOduration=126.263790347 podStartE2EDuration="2m6.263790347s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.261431081 +0000 UTC m=+146.231665236" watchObservedRunningTime="2025-11-28 06:50:23.263790347 +0000 UTC m=+146.234024502" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.266275 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" event={"ID":"7b21e995-d113-4b15-b59e-1ba217a862bc","Type":"ContainerStarted","Data":"f89c56fc953471e69357b5befe6d068d13dc259766bd5430ed10fdcf03058080"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.280274 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" event={"ID":"483479b4-efee-46d9-b4b9-c126ea3280df","Type":"ContainerStarted","Data":"6e18306ba8e438ae870518f18294545a0765bf5442bda56e635d758011fc05d5"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.298972 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.308758 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v55wm" event={"ID":"faf42878-840c-430d-b687-9a45b056b3b4","Type":"ContainerStarted","Data":"43406c696e73173793509d3c3b028915b8594ababe884adc19f77dbfcb29e3a6"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.309287 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v55wm" event={"ID":"faf42878-840c-430d-b687-9a45b056b3b4","Type":"ContainerStarted","Data":"983201d2a36b44806f5d810665083f234135bb5ba2083bfd5a63d16f782024c2"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.309747 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-v55wm" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.306682 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b" podStartSLOduration=126.306657593 podStartE2EDuration="2m6.306657593s" podCreationTimestamp="2025-11-28 
06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.305583489 +0000 UTC m=+146.275817664" watchObservedRunningTime="2025-11-28 06:50:23.306657593 +0000 UTC m=+146.276891758" Nov 28 06:50:23 crc kubenswrapper[4889]: E1128 06:50:23.317913 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.817891647 +0000 UTC m=+146.788126012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.362841 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.371408 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.371686 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" event={"ID":"00834fa5-849b-48d5-984e-7526dc4f71b4","Type":"ContainerStarted","Data":"015d590d187a1cbd037cf08ea45ad6f0ec8c2667f4aca7f044faacf9ee7cc03f"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.371803 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkwg6" event={"ID":"26337077-30a6-4855-9c14-4b0bece1353e","Type":"ContainerStarted","Data":"2b4469d306e3b0ef82f2b2bc24423f5689ff59250c87c82a761874d7ba972cd8"} Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.383135 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" podStartSLOduration=126.383111586 podStartE2EDuration="2m6.383111586s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.38229567 +0000 UTC m=+146.352529845" watchObservedRunningTime="2025-11-28 06:50:23.383111586 +0000 UTC m=+146.353345741" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.386605 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vbw2g" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.388968 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.396827 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cc6md" podStartSLOduration=126.396805239 podStartE2EDuration="2m6.396805239s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.353373344 +0000 UTC m=+146.323607509" watchObservedRunningTime="2025-11-28 06:50:23.396805239 +0000 UTC m=+146.367039394" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.407487 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.433420 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:23 crc kubenswrapper[4889]: E1128 06:50:23.435003 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:23.934983993 +0000 UTC m=+146.905218148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.448199 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:50:23 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Nov 28 06:50:23 crc kubenswrapper[4889]: [+]process-running ok Nov 28 06:50:23 crc kubenswrapper[4889]: healthz check failed Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.448253 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.463307 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-v55wm" podStartSLOduration=10.463285369 podStartE2EDuration="10.463285369s" podCreationTimestamp="2025-11-28 06:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:23.415336688 +0000 UTC m=+146.385570843" watchObservedRunningTime="2025-11-28 06:50:23.463285369 +0000 UTC m=+146.433519514" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.464128 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxh5k"] Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.539300 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:23 crc kubenswrapper[4889]: E1128 06:50:23.539730 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.03970038 +0000 UTC m=+147.009934535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.647290 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:23 crc kubenswrapper[4889]: E1128 06:50:23.647985 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.147965611 +0000 UTC m=+147.118199766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.681343 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8gg9"] Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.696015 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t2jgg" Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.799490 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:23 crc kubenswrapper[4889]: E1128 06:50:23.800015 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.299998447 +0000 UTC m=+147.270232592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:23 crc kubenswrapper[4889]: I1128 06:50:23.900127 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:23 crc kubenswrapper[4889]: E1128 06:50:23.900612 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.40059238 +0000 UTC m=+147.370826535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.005139 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:24 crc kubenswrapper[4889]: E1128 06:50:24.005926 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.505905646 +0000 UTC m=+147.476139801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.017400 4889 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.051311 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mw2sc"] Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.052328 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.057235 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.064687 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mw2sc"] Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.107288 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:24 crc kubenswrapper[4889]: E1128 06:50:24.108157 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.608135872 +0000 UTC m=+147.578370027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.209472 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.209530 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh8vg\" (UniqueName: \"kubernetes.io/projected/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-kube-api-access-bh8vg\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.209559 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-catalog-content\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.209628 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-utilities\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: E1128 06:50:24.209971 4889 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.709951094 +0000 UTC m=+147.680185249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.253215 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xgjw2"] Nov 28 06:50:24 crc kubenswrapper[4889]: W1128 06:50:24.277599 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ab5f29_c8c6_4073_a891_e4ecf8b34189.slice/crio-2444820c2e4bffd0c3eb37137e085076daa6592028ae22c92872714c4eec064a WatchSource:0}: Error finding container 2444820c2e4bffd0c3eb37137e085076daa6592028ae22c92872714c4eec064a: Status 404 returned error can't find the container with id 2444820c2e4bffd0c3eb37137e085076daa6592028ae22c92872714c4eec064a Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.311532 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:24 crc kubenswrapper[4889]: E1128 06:50:24.311758 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.811722335 +0000 UTC m=+147.781956490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.311887 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.311929 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.311976 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh8vg\" (UniqueName: \"kubernetes.io/projected/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-kube-api-access-bh8vg\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.311998 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.312018 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-catalog-content\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.312076 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.312150 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-utilities\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.312192 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:50:24 crc kubenswrapper[4889]: E1128 06:50:24.312719 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.812685557 +0000 UTC m=+147.782919712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjpk7" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.313117 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-catalog-content\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.313533 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-utilities\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.313610 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.325149 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.327773 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.331368 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.347906 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh8vg\" (UniqueName: \"kubernetes.io/projected/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-kube-api-access-bh8vg\") pod \"redhat-marketplace-mw2sc\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.353097 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.371606 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgjw2" event={"ID":"21ab5f29-c8c6-4073-a891-e4ecf8b34189","Type":"ContainerStarted","Data":"2444820c2e4bffd0c3eb37137e085076daa6592028ae22c92872714c4eec064a"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.373129 4889 generic.go:334] "Generic (PLEG): container finished" podID="60853e4e-b79e-4597-84fe-a051efbbeaff" containerID="45871325a83f347df1090dde68b84a17c23efee03ca683cdd746860926289fc1" exitCode=0 Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.373186 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" event={"ID":"60853e4e-b79e-4597-84fe-a051efbbeaff","Type":"ContainerDied","Data":"45871325a83f347df1090dde68b84a17c23efee03ca683cdd746860926289fc1"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.374556 4889 generic.go:334] "Generic (PLEG): container finished" podID="bbed2470-346a-4721-ab26-d098ee16a4af" containerID="50ed9263de170f589aee0ae53569924a4c61b6cdc125318e43dc75b1dde35173" exitCode=0 Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.374599 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8gg9" event={"ID":"bbed2470-346a-4721-ab26-d098ee16a4af","Type":"ContainerDied","Data":"50ed9263de170f589aee0ae53569924a4c61b6cdc125318e43dc75b1dde35173"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.374640 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8gg9" event={"ID":"bbed2470-346a-4721-ab26-d098ee16a4af","Type":"ContainerStarted","Data":"8213c9f8a202b063208276b8dab9c0eb7b5ee4b1fbe662c4a0d7204dc655c60c"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.376458 4889 generic.go:334] "Generic (PLEG): container finished" podID="e1c17912-a129-45b4-b833-04493886c507" containerID="fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993" exitCode=0 Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.376481 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.376500 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhttl" event={"ID":"e1c17912-a129-45b4-b833-04493886c507","Type":"ContainerDied","Data":"fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.376531 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhttl" 
event={"ID":"e1c17912-a129-45b4-b833-04493886c507","Type":"ContainerStarted","Data":"5ca4bc7f70088fda546554ea2d01568caf73b05893484d0895005fb88c876486"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.390493 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" event={"ID":"fedcbacb-0096-4b5f-83da-23a7af142d37","Type":"ContainerStarted","Data":"5ec2d6b5ece40f9f23ca9d06a387178df102d0ef3f0306145c3c22f5afa72c84"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.390557 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" event={"ID":"fedcbacb-0096-4b5f-83da-23a7af142d37","Type":"ContainerStarted","Data":"b5a37a756cc7a7c05be96dc2dd939fb7937fa9b7ec47c391cddf04676d97280d"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.408180 4889 generic.go:334] "Generic (PLEG): container finished" podID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerID="6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d" exitCode=0 Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.409906 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxh5k" event={"ID":"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b","Type":"ContainerDied","Data":"6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.409949 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxh5k" event={"ID":"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b","Type":"ContainerStarted","Data":"b1634f6a4135dafb6282d7d187ced351e892910752b8051ca6c7afd2e88c704e"} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.412575 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:24 crc kubenswrapper[4889]: E1128 06:50:24.413407 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:50:24.913382293 +0000 UTC m=+147.883616438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.416752 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bc8m" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.426263 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.440535 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:50:24 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Nov 28 06:50:24 crc kubenswrapper[4889]: [+]process-running ok Nov 28 06:50:24 crc kubenswrapper[4889]: healthz check failed Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.440612 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.447248 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6qp89"] Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.448372 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.455189 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qp89"] Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.511563 4889 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T06:50:24.017431658Z","Handler":null,"Name":""} Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.517916 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.520037 4889 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.520061 4889 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.555048 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.563033 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.626391 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-catalog-content\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.626576 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lmm\" (UniqueName: \"kubernetes.io/projected/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-kube-api-access-w4lmm\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.626638 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-utilities\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.727788 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-catalog-content\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.727874 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lmm\" (UniqueName: \"kubernetes.io/projected/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-kube-api-access-w4lmm\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.727906 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-utilities\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.728343 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-utilities\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.728527 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-catalog-content\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.743023 4889 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.743085 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.763299 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lmm\" (UniqueName: \"kubernetes.io/projected/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-kube-api-access-w4lmm\") pod \"redhat-marketplace-6qp89\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.804250 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.867124 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mw2sc"] Nov 28 06:50:24 crc kubenswrapper[4889]: W1128 06:50:24.894954 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod214d7b41_e8c9_4e25_bf80_48ff31b4a29b.slice/crio-f015e658852a75869cfa267662665092d411673ad1fb9d8226ee91c031fce0fe WatchSource:0}: Error finding container f015e658852a75869cfa267662665092d411673ad1fb9d8226ee91c031fce0fe: Status 404 returned error can't find the container with id f015e658852a75869cfa267662665092d411673ad1fb9d8226ee91c031fce0fe Nov 28 06:50:24 crc kubenswrapper[4889]: I1128 06:50:24.940123 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjpk7\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:24 crc kubenswrapper[4889]: W1128 06:50:24.972756 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-ec19108baea7eab87034f90ae7ff040d83d0c9089d634b4732ac3f10fa04caf8 WatchSource:0}: Error finding container ec19108baea7eab87034f90ae7ff040d83d0c9089d634b4732ac3f10fa04caf8: Status 404 returned error can't find the container with id ec19108baea7eab87034f90ae7ff040d83d0c9089d634b4732ac3f10fa04caf8 Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.036026 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.043210 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
(OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.072983 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qp89"] Nov 28 06:50:25 crc kubenswrapper[4889]: W1128 06:50:25.081413 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc552cb6_fad3_4a78_a7b5_c3b81c4336a7.slice/crio-15b6028c32924888bbe211024604533076b0c7402a9eb1f0cf35e9f1277fe167 WatchSource:0}: Error finding container 15b6028c32924888bbe211024604533076b0c7402a9eb1f0cf35e9f1277fe167: Status 404 returned error can't find the container with id 15b6028c32924888bbe211024604533076b0c7402a9eb1f0cf35e9f1277fe167 Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.133257 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.374308 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.420590 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qp89" event={"ID":"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7","Type":"ContainerStarted","Data":"34048bdf5b8e5094f95d28e9a257df1c9f5d6918e4be2588a7be589f64029804"} Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.420662 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qp89" event={"ID":"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7","Type":"ContainerStarted","Data":"15b6028c32924888bbe211024604533076b0c7402a9eb1f0cf35e9f1277fe167"} Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.435222 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e8b3d1d5bda2d836b5db901d2cb3c0e0138d4f71817b645b519e0c152620e6c6"} Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.443162 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:50:25 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Nov 28 06:50:25 crc kubenswrapper[4889]: [+]process-running ok Nov 28 06:50:25 crc kubenswrapper[4889]: healthz check failed Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.443232 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.458850 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"800db3c006edd216e427b2fc99e60ef4da073f7063b00dedd29711dccc865248"} Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.459262 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ec19108baea7eab87034f90ae7ff040d83d0c9089d634b4732ac3f10fa04caf8"} Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.459949 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.462196 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8f887"] Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.463515 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8f887" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.464904 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mw2sc" event={"ID":"214d7b41-e8c9-4e25-bf80-48ff31b4a29b","Type":"ContainerStarted","Data":"2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc"} Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.465032 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mw2sc" event={"ID":"214d7b41-e8c9-4e25-bf80-48ff31b4a29b","Type":"ContainerStarted","Data":"f015e658852a75869cfa267662665092d411673ad1fb9d8226ee91c031fce0fe"} Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.466750 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.477943 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzbx\" (UniqueName: \"kubernetes.io/projected/33bff935-df7b-4a61-8cab-84f408c1c9de-kube-api-access-jgzbx\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.478009 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-catalog-content\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.478128 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-utilities\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887" Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.480984 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f1f064419f1915ff0b41b100aafc97075fb1f6797d25f145a8d6a0396e46a7f8"} Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.481036 4889 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f3aa06aca3fb4d9f8df497cf3fa80b46a787c7c5357555f5646ebba0b45ffbbd"}
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.492215 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjpk7"]
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.502804 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8f887"]
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.503696 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" event={"ID":"fedcbacb-0096-4b5f-83da-23a7af142d37","Type":"ContainerStarted","Data":"1769d3f6cd8a98e53b8746fc657c5b91a18fd973d9ffc7548b2f7b997f7fe438"}
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.506244 4889 generic.go:334] "Generic (PLEG): container finished" podID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerID="5be5680e5b810a147ee0996e60bd064c858b8aefe134e225fc85d496cf7782d1" exitCode=0
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.507734 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgjw2" event={"ID":"21ab5f29-c8c6-4073-a891-e4ecf8b34189","Type":"ContainerDied","Data":"5be5680e5b810a147ee0996e60bd064c858b8aefe134e225fc85d496cf7782d1"}
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.579273 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzbx\" (UniqueName: \"kubernetes.io/projected/33bff935-df7b-4a61-8cab-84f408c1c9de-kube-api-access-jgzbx\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.579324 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-catalog-content\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.579388 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-utilities\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.579996 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-utilities\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.581503 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-catalog-content\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.620590 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gxwdj" podStartSLOduration=12.620560333 podStartE2EDuration="12.620560333s" podCreationTimestamp="2025-11-28 06:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:25.605274498 +0000 UTC m=+148.575508653" watchObservedRunningTime="2025-11-28 06:50:25.620560333 +0000 UTC m=+148.590794488"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.664191 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzbx\" (UniqueName: \"kubernetes.io/projected/33bff935-df7b-4a61-8cab-84f408c1c9de-kube-api-access-jgzbx\") pod \"redhat-operators-8f887\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.784965 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.787034 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.802979 4889 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ptzsm container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]log ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]etcd ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/max-in-flight-filter ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/project.openshift.io-projectcache ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/openshift.io-startinformers ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/openshift.io-restmapperupdater ok
Nov 28 06:50:25 crc kubenswrapper[4889]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 28 06:50:25 crc kubenswrapper[4889]: livez check failed
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.803072 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm" podUID="30b525af-4632-4fe9-bdd7-6ca436cedeb7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.849695 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qpsrw"]
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.851012 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.871879 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.882123 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpsrw"]
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.888691 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.988452 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvqqk\" (UniqueName: \"kubernetes.io/projected/60853e4e-b79e-4597-84fe-a051efbbeaff-kube-api-access-qvqqk\") pod \"60853e4e-b79e-4597-84fe-a051efbbeaff\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") "
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.988981 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60853e4e-b79e-4597-84fe-a051efbbeaff-secret-volume\") pod \"60853e4e-b79e-4597-84fe-a051efbbeaff\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") "
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.989633 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60853e4e-b79e-4597-84fe-a051efbbeaff-config-volume\") pod \"60853e4e-b79e-4597-84fe-a051efbbeaff\" (UID: \"60853e4e-b79e-4597-84fe-a051efbbeaff\") "
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.989935 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-utilities\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.990040 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-catalog-content\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.990124 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbmmq\" (UniqueName: \"kubernetes.io/projected/6ee34603-f895-4e08-88d2-dc04ac976df1-kube-api-access-hbmmq\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.990636 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60853e4e-b79e-4597-84fe-a051efbbeaff-config-volume" (OuterVolumeSpecName: "config-volume") pod "60853e4e-b79e-4597-84fe-a051efbbeaff" (UID: "60853e4e-b79e-4597-84fe-a051efbbeaff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.998822 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60853e4e-b79e-4597-84fe-a051efbbeaff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60853e4e-b79e-4597-84fe-a051efbbeaff" (UID: "60853e4e-b79e-4597-84fe-a051efbbeaff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:50:25 crc kubenswrapper[4889]: I1128 06:50:25.999098 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60853e4e-b79e-4597-84fe-a051efbbeaff-kube-api-access-qvqqk" (OuterVolumeSpecName: "kube-api-access-qvqqk") pod "60853e4e-b79e-4597-84fe-a051efbbeaff" (UID: "60853e4e-b79e-4597-84fe-a051efbbeaff"). InnerVolumeSpecName "kube-api-access-qvqqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.091802 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-catalog-content\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.091891 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbmmq\" (UniqueName: \"kubernetes.io/projected/6ee34603-f895-4e08-88d2-dc04ac976df1-kube-api-access-hbmmq\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.091930 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-utilities\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.091979 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvqqk\" (UniqueName: \"kubernetes.io/projected/60853e4e-b79e-4597-84fe-a051efbbeaff-kube-api-access-qvqqk\") on node \"crc\" DevicePath \"\""
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.091992 4889 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60853e4e-b79e-4597-84fe-a051efbbeaff-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.092002 4889 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60853e4e-b79e-4597-84fe-a051efbbeaff-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.092603 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-utilities\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.092869 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-catalog-content\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.112825 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbmmq\" (UniqueName: \"kubernetes.io/projected/6ee34603-f895-4e08-88d2-dc04ac976df1-kube-api-access-hbmmq\") pod \"redhat-operators-qpsrw\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") " pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.173512 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8f887"]
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.187117 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:50:26 crc kubenswrapper[4889]: W1128 06:50:26.196979 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33bff935_df7b_4a61_8cab_84f408c1c9de.slice/crio-5020b5ced62aa41ef56bfb0ed71a12ee9cb6b73db2b3ffe1e9f79aa599770f26 WatchSource:0}: Error finding container 5020b5ced62aa41ef56bfb0ed71a12ee9cb6b73db2b3ffe1e9f79aa599770f26: Status 404 returned error can't find the container with id 5020b5ced62aa41ef56bfb0ed71a12ee9cb6b73db2b3ffe1e9f79aa599770f26
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.365900 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 28 06:50:26 crc kubenswrapper[4889]: E1128 06:50:26.366785 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60853e4e-b79e-4597-84fe-a051efbbeaff" containerName="collect-profiles"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.366801 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="60853e4e-b79e-4597-84fe-a051efbbeaff" containerName="collect-profiles"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.366988 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="60853e4e-b79e-4597-84fe-a051efbbeaff" containerName="collect-profiles"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.371960 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.374426 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.374683 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.412315 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.436922 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:26 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:26 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:26 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.437058 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.501667 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/708ece54-89b6-4be0-b427-4d84f92dab13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"708ece54-89b6-4be0-b427-4d84f92dab13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.501742 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/708ece54-89b6-4be0-b427-4d84f92dab13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"708ece54-89b6-4be0-b427-4d84f92dab13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.567355 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" event={"ID":"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d","Type":"ContainerStarted","Data":"58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c"}
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.567439 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" event={"ID":"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d","Type":"ContainerStarted","Data":"15cae44dd65b563875415af82b31eb6cc86794d3744abf6651551a5e359738e3"}
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.569066 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.576405 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f887" event={"ID":"33bff935-df7b-4a61-8cab-84f408c1c9de","Type":"ContainerStarted","Data":"3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea"}
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.576452 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f887" event={"ID":"33bff935-df7b-4a61-8cab-84f408c1c9de","Type":"ContainerStarted","Data":"5020b5ced62aa41ef56bfb0ed71a12ee9cb6b73db2b3ffe1e9f79aa599770f26"}
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.582808 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9" event={"ID":"60853e4e-b79e-4597-84fe-a051efbbeaff","Type":"ContainerDied","Data":"e9d95f99ac3b70d08ca1c55090d165d78776b498697e7fc02f662b91f7b2bfee"}
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.582869 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9d95f99ac3b70d08ca1c55090d165d78776b498697e7fc02f662b91f7b2bfee"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.582961 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-58zz9"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.594476 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" podStartSLOduration=129.594452628 podStartE2EDuration="2m9.594452628s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:50:26.590211921 +0000 UTC m=+149.560446076" watchObservedRunningTime="2025-11-28 06:50:26.594452628 +0000 UTC m=+149.564686783"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.600942 4889 generic.go:334] "Generic (PLEG): container finished" podID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerID="34048bdf5b8e5094f95d28e9a257df1c9f5d6918e4be2588a7be589f64029804" exitCode=0
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.602926 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qp89" event={"ID":"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7","Type":"ContainerDied","Data":"34048bdf5b8e5094f95d28e9a257df1c9f5d6918e4be2588a7be589f64029804"}
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.604509 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/708ece54-89b6-4be0-b427-4d84f92dab13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"708ece54-89b6-4be0-b427-4d84f92dab13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.604566 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/708ece54-89b6-4be0-b427-4d84f92dab13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"708ece54-89b6-4be0-b427-4d84f92dab13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.604664 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/708ece54-89b6-4be0-b427-4d84f92dab13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"708ece54-89b6-4be0-b427-4d84f92dab13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.639132 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/708ece54-89b6-4be0-b427-4d84f92dab13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"708ece54-89b6-4be0-b427-4d84f92dab13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.665220 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8fcf0d45430069da9bf1933ba3be06593d3d982bbd8bb034b56b7b4749eb3aee"}
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.678522 4889 generic.go:334] "Generic (PLEG): container finished" podID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerID="2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc" exitCode=0
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.679304 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mw2sc" event={"ID":"214d7b41-e8c9-4e25-bf80-48ff31b4a29b","Type":"ContainerDied","Data":"2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc"}
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.713487 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.863831 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpsrw"]
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.870827 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-bw9t7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.870914 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.870946 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.871639 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9h4ng"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.871656 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-bw9t7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.871681 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bw9t7" podUID="8789adc8-7db9-46c9-994b-b5be723cc076" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.871696 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9h4ng"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.871739 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bw9t7" podUID="8789adc8-7db9-46c9-994b-b5be723cc076" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.891342 4889 patch_prober.go:28] interesting pod/console-f9d7485db-9h4ng container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.891408 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9h4ng" podUID="aca1ea5e-ae14-45a8-9a19-acaea4176a13" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused"
Nov 28 06:50:26 crc kubenswrapper[4889]: I1128 06:50:26.944482 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.162490 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.432244 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tsgkn"
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.435758 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:27 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:27 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:27 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.435812 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.701097 4889 generic.go:334] "Generic (PLEG): container finished" podID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerID="3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea" exitCode=0
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.701260 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f887" event={"ID":"33bff935-df7b-4a61-8cab-84f408c1c9de","Type":"ContainerDied","Data":"3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea"}
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.707830 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"708ece54-89b6-4be0-b427-4d84f92dab13","Type":"ContainerStarted","Data":"f74174074302c17c3418d52641344d2aa344406444302baddcdb48705dea9f57"}
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.716374 4889 generic.go:334] "Generic (PLEG): container finished" podID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerID="4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f" exitCode=0
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.718396 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpsrw" event={"ID":"6ee34603-f895-4e08-88d2-dc04ac976df1","Type":"ContainerDied","Data":"4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f"}
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.718442 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpsrw" event={"ID":"6ee34603-f895-4e08-88d2-dc04ac976df1","Type":"ContainerStarted","Data":"7cef2bf261514310463e3b436a8ee2dba5d401800a12471112217302c05a5148"}
Nov 28 06:50:27 crc kubenswrapper[4889]: I1128 06:50:27.731318 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qh5b"
Nov 28 06:50:28 crc kubenswrapper[4889]: I1128 06:50:28.436504 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:28 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:28 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:28 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:28 crc kubenswrapper[4889]: I1128 06:50:28.436599 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:28 crc kubenswrapper[4889]: I1128 06:50:28.734369 4889 generic.go:334] "Generic (PLEG): container finished" podID="708ece54-89b6-4be0-b427-4d84f92dab13" containerID="5318bcce67daa62f7e93f08148c04a6b2867d9bac36a181420eadfe2b0e2190b" exitCode=0
Nov 28 06:50:28 crc kubenswrapper[4889]: I1128 06:50:28.735929 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"708ece54-89b6-4be0-b427-4d84f92dab13","Type":"ContainerDied","Data":"5318bcce67daa62f7e93f08148c04a6b2867d9bac36a181420eadfe2b0e2190b"}
Nov 28 06:50:28 crc kubenswrapper[4889]: I1128 06:50:28.782297 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 06:50:28 crc kubenswrapper[4889]: I1128 06:50:28.782697 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.436416 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:29 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:29 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:29 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.436478 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.860849 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.861782 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.873661 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.874233 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.884354 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.976341 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:29 crc kubenswrapper[4889]: I1128 06:50:29.976400 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.084617 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.084678 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.084816 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.107950 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.109253 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.185842 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/708ece54-89b6-4be0-b427-4d84f92dab13-kube-api-access\") pod \"708ece54-89b6-4be0-b427-4d84f92dab13\" (UID: \"708ece54-89b6-4be0-b427-4d84f92dab13\") "
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.185906 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/708ece54-89b6-4be0-b427-4d84f92dab13-kubelet-dir\") pod \"708ece54-89b6-4be0-b427-4d84f92dab13\" (UID: \"708ece54-89b6-4be0-b427-4d84f92dab13\") "
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.186244 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/708ece54-89b6-4be0-b427-4d84f92dab13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "708ece54-89b6-4be0-b427-4d84f92dab13" (UID: "708ece54-89b6-4be0-b427-4d84f92dab13"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.191212 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708ece54-89b6-4be0-b427-4d84f92dab13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "708ece54-89b6-4be0-b427-4d84f92dab13" (UID: "708ece54-89b6-4be0-b427-4d84f92dab13"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.198447 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.287281 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/708ece54-89b6-4be0-b427-4d84f92dab13-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.287817 4889 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/708ece54-89b6-4be0-b427-4d84f92dab13-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.449045 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:30 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:30 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:30 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.449116 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.567760 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 06:50:30 crc kubenswrapper[4889]: W1128 06:50:30.607247 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2b2896a2_8a59_4205_9c66_6b8cc4bd222b.slice/crio-f9a302d6da74ce0dba9f095f13adfbd156edc950e4be586a2cdd57e609e9f466 WatchSource:0}: Error finding container f9a302d6da74ce0dba9f095f13adfbd156edc950e4be586a2cdd57e609e9f466: Status 404 returned error can't find the container with id f9a302d6da74ce0dba9f095f13adfbd156edc950e4be586a2cdd57e609e9f466
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.757189 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.757193 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"708ece54-89b6-4be0-b427-4d84f92dab13","Type":"ContainerDied","Data":"f74174074302c17c3418d52641344d2aa344406444302baddcdb48705dea9f57"}
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.757329 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74174074302c17c3418d52641344d2aa344406444302baddcdb48705dea9f57"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.765608 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2b2896a2-8a59-4205-9c66-6b8cc4bd222b","Type":"ContainerStarted","Data":"f9a302d6da74ce0dba9f095f13adfbd156edc950e4be586a2cdd57e609e9f466"}
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.787829 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:30 crc kubenswrapper[4889]: I1128 06:50:30.793118 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ptzsm"
Nov 28 06:50:31 crc kubenswrapper[4889]: I1128 06:50:31.436519 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:31 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:31 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:31 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:31 crc kubenswrapper[4889]: I1128 06:50:31.436928 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:32 crc kubenswrapper[4889]: I1128 06:50:32.434188 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:32 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:32 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:32 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:32 crc kubenswrapper[4889]: I1128 06:50:32.435447 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:32 crc kubenswrapper[4889]: I1128 06:50:32.881426 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-v55wm"
Nov 28 06:50:33 crc kubenswrapper[4889]: I1128 06:50:33.434255 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:33 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:33 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:33 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:33 crc kubenswrapper[4889]: I1128 06:50:33.434320 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:34 crc kubenswrapper[4889]: I1128 06:50:34.434211 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:34 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:34 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:34 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:34 crc kubenswrapper[4889]: I1128 06:50:34.434272 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:34 crc kubenswrapper[4889]: I1128 06:50:34.807932 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2b2896a2-8a59-4205-9c66-6b8cc4bd222b","Type":"ContainerStarted","Data":"c5586171ce651c848256204a74ac530c29e090e9332368261b1f5b90b4b030ab"}
Nov 28 06:50:35 crc kubenswrapper[4889]: I1128 06:50:35.434884 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:35 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:35 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:35 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:35 crc kubenswrapper[4889]: I1128 06:50:35.434943 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:36 crc kubenswrapper[4889]: I1128 06:50:36.434757 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:36 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:36 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:36 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:36 crc kubenswrapper[4889]: I1128 06:50:36.434827 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:36 crc kubenswrapper[4889]: I1128 06:50:36.833775 4889 generic.go:334] "Generic (PLEG): container finished" podID="2b2896a2-8a59-4205-9c66-6b8cc4bd222b" containerID="c5586171ce651c848256204a74ac530c29e090e9332368261b1f5b90b4b030ab" exitCode=0
Nov 28 06:50:36 crc kubenswrapper[4889]: I1128 06:50:36.834787 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2b2896a2-8a59-4205-9c66-6b8cc4bd222b","Type":"ContainerDied","Data":"c5586171ce651c848256204a74ac530c29e090e9332368261b1f5b90b4b030ab"}
Nov 28 06:50:36 crc kubenswrapper[4889]: I1128 06:50:36.871254 4889 patch_prober.go:28] interesting pod/console-f9d7485db-9h4ng container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Nov 28 06:50:36 crc kubenswrapper[4889]: I1128 06:50:36.871314 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9h4ng" podUID="aca1ea5e-ae14-45a8-9a19-acaea4176a13" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused"
Nov 28 06:50:36 crc kubenswrapper[4889]: I1128 06:50:36.879896 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bw9t7"
Nov 28 06:50:37 crc kubenswrapper[4889]: I1128 06:50:37.436844 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:37 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:37 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:37 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:37 crc kubenswrapper[4889]: I1128 06:50:37.436917 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:38 crc kubenswrapper[4889]: I1128 06:50:38.434462 4889 patch_prober.go:28] interesting pod/router-default-5444994796-tsgkn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:50:38 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Nov 28 06:50:38 crc kubenswrapper[4889]: [+]process-running ok
Nov 28 06:50:38 crc kubenswrapper[4889]: healthz check failed
Nov 28 06:50:38 crc kubenswrapper[4889]: I1128 06:50:38.434896 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tsgkn" podUID="06eb8e8a-2974-4453-a266-988fe75852d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:50:39 crc kubenswrapper[4889]: I1128 06:50:39.157278 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:39 crc kubenswrapper[4889]: I1128 06:50:39.163532 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e209e335-9f44-41a8-a8f2-093d2bdcfe6b-metrics-certs\") pod \"network-metrics-daemon-mbrtc\" (UID: \"e209e335-9f44-41a8-a8f2-093d2bdcfe6b\") " pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:39 crc kubenswrapper[4889]: I1128 06:50:39.363510 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mbrtc"
Nov 28 06:50:39 crc kubenswrapper[4889]: I1128 06:50:39.822164 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tsgkn"
Nov 28 06:50:39 crc kubenswrapper[4889]: I1128 06:50:39.827221 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tsgkn"
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.142035 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7"
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.495951 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.664258 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kube-api-access\") pod \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\" (UID: \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\") "
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.664939 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kubelet-dir\") pod \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\" (UID: \"2b2896a2-8a59-4205-9c66-6b8cc4bd222b\") "
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.665153 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2b2896a2-8a59-4205-9c66-6b8cc4bd222b" (UID: "2b2896a2-8a59-4205-9c66-6b8cc4bd222b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.665529 4889 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.670950 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2b2896a2-8a59-4205-9c66-6b8cc4bd222b" (UID: "2b2896a2-8a59-4205-9c66-6b8cc4bd222b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.766826 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b2896a2-8a59-4205-9c66-6b8cc4bd222b-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.901898 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2b2896a2-8a59-4205-9c66-6b8cc4bd222b","Type":"ContainerDied","Data":"f9a302d6da74ce0dba9f095f13adfbd156edc950e4be586a2cdd57e609e9f466"}
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.901974 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a302d6da74ce0dba9f095f13adfbd156edc950e4be586a2cdd57e609e9f466"
Nov 28 06:50:45 crc kubenswrapper[4889]: I1128 06:50:45.902464 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 06:50:47 crc kubenswrapper[4889]: I1128 06:50:47.947563 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9h4ng"
Nov 28 06:50:47 crc kubenswrapper[4889]: I1128 06:50:47.952645 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9h4ng"
Nov 28 06:50:57 crc kubenswrapper[4889]: I1128 06:50:57.518299 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d2nbw"
Nov 28 06:50:58 crc kubenswrapper[4889]: I1128 06:50:58.783151 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 06:50:58 crc kubenswrapper[4889]: I1128 06:50:58.783532 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 06:51:05 crc kubenswrapper[4889]: I1128 06:51:05.082604 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.758365 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 28 06:51:09 crc kubenswrapper[4889]: E1128 06:51:09.759299 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708ece54-89b6-4be0-b427-4d84f92dab13" containerName="pruner"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.759322 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="708ece54-89b6-4be0-b427-4d84f92dab13" containerName="pruner"
Nov 28 06:51:09 crc kubenswrapper[4889]: E1128 06:51:09.759346 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2896a2-8a59-4205-9c66-6b8cc4bd222b" containerName="pruner"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.759359 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2896a2-8a59-4205-9c66-6b8cc4bd222b" containerName="pruner"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.759529 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="708ece54-89b6-4be0-b427-4d84f92dab13" containerName="pruner"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.759567 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2896a2-8a59-4205-9c66-6b8cc4bd222b" containerName="pruner"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.760277 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.763695 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.764100 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.770239 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.842389 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d4974b-9216-4694-856d-7bd0baa4351a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b8d4974b-9216-4694-856d-7bd0baa4351a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.842492 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d4974b-9216-4694-856d-7bd0baa4351a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b8d4974b-9216-4694-856d-7bd0baa4351a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.944131 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d4974b-9216-4694-856d-7bd0baa4351a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b8d4974b-9216-4694-856d-7bd0baa4351a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.944248 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d4974b-9216-4694-856d-7bd0baa4351a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b8d4974b-9216-4694-856d-7bd0baa4351a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.944398 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d4974b-9216-4694-856d-7bd0baa4351a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b8d4974b-9216-4694-856d-7bd0baa4351a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 06:51:09 crc kubenswrapper[4889]: I1128 06:51:09.967560 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d4974b-9216-4694-856d-7bd0baa4351a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b8d4974b-9216-4694-856d-7bd0baa4351a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 06:51:10 crc kubenswrapper[4889]: I1128 06:51:10.093684 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 06:51:15 crc kubenswrapper[4889]: I1128 06:51:15.759297 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 06:51:15 crc kubenswrapper[4889]: I1128 06:51:15.762937 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:15 crc kubenswrapper[4889]: I1128 06:51:15.780076 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 06:51:15 crc kubenswrapper[4889]: I1128 06:51:15.944431 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:15 crc kubenswrapper[4889]: I1128 06:51:15.944497 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-var-lock\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:15 crc kubenswrapper[4889]: I1128 06:51:15.944586 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kube-api-access\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:16 crc kubenswrapper[4889]: I1128 06:51:16.045958 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:16 crc kubenswrapper[4889]: I1128 06:51:16.046027 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-var-lock\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:16 crc kubenswrapper[4889]: I1128 06:51:16.046115 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kube-api-access\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:16 crc kubenswrapper[4889]: I1128 06:51:16.046314 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:16 crc kubenswrapper[4889]: I1128 06:51:16.046695 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-var-lock\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:16 crc kubenswrapper[4889]: I1128 06:51:16.081505 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kube-api-access\") pod \"installer-9-crc\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:16 crc kubenswrapper[4889]: I1128 06:51:16.101477 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 06:51:16 crc kubenswrapper[4889]: E1128 06:51:16.431162 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 28 06:51:16 crc kubenswrapper[4889]: E1128 06:51:16.431892 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgzbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8f887_openshift-marketplace(33bff935-df7b-4a61-8cab-84f408c1c9de): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 06:51:16 crc kubenswrapper[4889]: E1128 06:51:16.433214 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8f887" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de"
Nov 28 06:51:17 crc kubenswrapper[4889]: E1128 06:51:17.874218 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\""
pod="openshift-marketplace/redhat-operators-8f887" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" Nov 28 06:51:17 crc kubenswrapper[4889]: E1128 06:51:17.930843 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 06:51:17 crc kubenswrapper[4889]: E1128 06:51:17.931014 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxvlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xgjw2_openshift-marketplace(21ab5f29-c8c6-4073-a891-e4ecf8b34189): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:51:17 crc kubenswrapper[4889]: E1128 06:51:17.932221 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xgjw2" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" Nov 28 06:51:19 crc kubenswrapper[4889]: E1128 06:51:19.357160 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xgjw2" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" Nov 28 06:51:19 crc kubenswrapper[4889]: E1128 06:51:19.429731 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 06:51:19 crc kubenswrapper[4889]: E1128 06:51:19.429939 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhf9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t8gg9_openshift-marketplace(bbed2470-346a-4721-ab26-d098ee16a4af): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:51:19 crc kubenswrapper[4889]: E1128 06:51:19.431213 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-t8gg9" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.131880 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-t8gg9" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.210801 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.210993 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4kss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vhttl_openshift-marketplace(e1c17912-a129-45b4-b833-04493886c507): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.213105 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vhttl" podUID="e1c17912-a129-45b4-b833-04493886c507" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.253174 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.253545 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhw2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rxh5k_openshift-marketplace(95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.254759 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rxh5k" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.263316 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.263439 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bh8vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mw2sc_openshift-marketplace(214d7b41-e8c9-4e25-bf80-48ff31b4a29b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.264593 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mw2sc" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.293750 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.293944 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4lmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6qp89_openshift-marketplace(cc552cb6-fad3-4a78-a7b5-c3b81c4336a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.295211 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6qp89" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.315146 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.315297 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbmmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qpsrw_openshift-marketplace(6ee34603-f895-4e08-88d2-dc04ac976df1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:51:20 crc kubenswrapper[4889]: E1128 06:51:20.316688 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qpsrw" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" Nov 28 06:51:20 crc kubenswrapper[4889]: I1128 06:51:20.576800 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 06:51:20 crc kubenswrapper[4889]: I1128 06:51:20.579537 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mbrtc"] Nov 28 06:51:20 crc kubenswrapper[4889]: I1128 06:51:20.582197 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 06:51:20 crc kubenswrapper[4889]: W1128 06:51:20.596378 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode209e335_9f44_41a8_a8f2_093d2bdcfe6b.slice/crio-affc034c6cdc9db8b547c574413ab545a3faa19f5450a6d96c78cbfd98ffaab6 WatchSource:0}: Error finding container affc034c6cdc9db8b547c574413ab545a3faa19f5450a6d96c78cbfd98ffaab6: Status 404 returned error can't find the container with id affc034c6cdc9db8b547c574413ab545a3faa19f5450a6d96c78cbfd98ffaab6 Nov 28 06:51:21 crc kubenswrapper[4889]: I1128 06:51:21.117601 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c","Type":"ContainerStarted","Data":"4b91206cd06e0f1566b9a5c64835e7539a86ca07a82804c036e6a63ad6f94d0e"} Nov 28 06:51:21 crc kubenswrapper[4889]: I1128 06:51:21.119415 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" 
event={"ID":"e209e335-9f44-41a8-a8f2-093d2bdcfe6b","Type":"ContainerStarted","Data":"7351822ce332baaed8ae837003886e1fca5ebe635f6275fe9c10103f9e8a65e6"} Nov 28 06:51:21 crc kubenswrapper[4889]: I1128 06:51:21.119455 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mbrtc" event={"ID":"e209e335-9f44-41a8-a8f2-093d2bdcfe6b","Type":"ContainerStarted","Data":"affc034c6cdc9db8b547c574413ab545a3faa19f5450a6d96c78cbfd98ffaab6"} Nov 28 06:51:21 crc kubenswrapper[4889]: I1128 06:51:21.120863 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b8d4974b-9216-4694-856d-7bd0baa4351a","Type":"ContainerStarted","Data":"1ef66ab856b2d8d4be4656a7b79f11676a691a9a28b5e0bbd725e79e00aea3b2"} Nov 28 06:51:21 crc kubenswrapper[4889]: I1128 06:51:21.120921 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b8d4974b-9216-4694-856d-7bd0baa4351a","Type":"ContainerStarted","Data":"a519ec9c13ae721f162aa6ae9973bb252f140ec3704aaf2043eb7e440d71b0e8"} Nov 28 06:51:21 crc kubenswrapper[4889]: E1128 06:51:21.123352 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qpsrw" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" Nov 28 06:51:21 crc kubenswrapper[4889]: E1128 06:51:21.123404 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rxh5k" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" Nov 28 06:51:21 crc kubenswrapper[4889]: E1128 06:51:21.123650 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vhttl" podUID="e1c17912-a129-45b4-b833-04493886c507" Nov 28 06:51:21 crc kubenswrapper[4889]: E1128 06:51:21.124450 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mw2sc" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" Nov 28 06:51:21 crc kubenswrapper[4889]: E1128 06:51:21.124483 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6qp89" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" Nov 28 06:51:22 crc kubenswrapper[4889]: I1128 06:51:22.128930 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c","Type":"ContainerStarted","Data":"2df6268fbd69adac311306893fdf1931679b78013d672f716ba605f5751bfb62"} Nov 28 06:51:22 crc kubenswrapper[4889]: I1128 06:51:22.132015 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-mbrtc" event={"ID":"e209e335-9f44-41a8-a8f2-093d2bdcfe6b","Type":"ContainerStarted","Data":"e3c84c6ef56396849ba227d90550431d41cc2b638174f7c0c885d7fa1effbbdb"} Nov 28 06:51:22 crc kubenswrapper[4889]: I1128 06:51:22.133865 4889 generic.go:334] "Generic (PLEG): container finished" podID="b8d4974b-9216-4694-856d-7bd0baa4351a" containerID="1ef66ab856b2d8d4be4656a7b79f11676a691a9a28b5e0bbd725e79e00aea3b2" exitCode=0 Nov 28 06:51:22 crc kubenswrapper[4889]: I1128 06:51:22.133905 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b8d4974b-9216-4694-856d-7bd0baa4351a","Type":"ContainerDied","Data":"1ef66ab856b2d8d4be4656a7b79f11676a691a9a28b5e0bbd725e79e00aea3b2"} Nov 28 06:51:22 crc kubenswrapper[4889]: I1128 06:51:22.156420 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.15635432 podStartE2EDuration="7.15635432s" podCreationTimestamp="2025-11-28 06:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:51:22.153313197 +0000 UTC m=+205.123547352" watchObservedRunningTime="2025-11-28 06:51:22.15635432 +0000 UTC m=+205.126588515" Nov 28 06:51:22 crc kubenswrapper[4889]: I1128 06:51:22.176852 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mbrtc" podStartSLOduration=185.176818517 podStartE2EDuration="3m5.176818517s" podCreationTimestamp="2025-11-28 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:51:22.175849314 +0000 UTC m=+205.146083469" watchObservedRunningTime="2025-11-28 06:51:22.176818517 +0000 UTC m=+205.147052682" Nov 28 06:51:23 crc kubenswrapper[4889]: I1128 06:51:23.360795 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:51:23 crc kubenswrapper[4889]: I1128 06:51:23.549145 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d4974b-9216-4694-856d-7bd0baa4351a-kubelet-dir\") pod \"b8d4974b-9216-4694-856d-7bd0baa4351a\" (UID: \"b8d4974b-9216-4694-856d-7bd0baa4351a\") " Nov 28 06:51:23 crc kubenswrapper[4889]: I1128 06:51:23.549255 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d4974b-9216-4694-856d-7bd0baa4351a-kube-api-access\") pod \"b8d4974b-9216-4694-856d-7bd0baa4351a\" (UID: \"b8d4974b-9216-4694-856d-7bd0baa4351a\") " Nov 28 06:51:23 crc kubenswrapper[4889]: I1128 06:51:23.549347 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d4974b-9216-4694-856d-7bd0baa4351a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b8d4974b-9216-4694-856d-7bd0baa4351a" (UID: "b8d4974b-9216-4694-856d-7bd0baa4351a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:51:23 crc kubenswrapper[4889]: I1128 06:51:23.549602 4889 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d4974b-9216-4694-856d-7bd0baa4351a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:23 crc kubenswrapper[4889]: I1128 06:51:23.557104 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d4974b-9216-4694-856d-7bd0baa4351a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b8d4974b-9216-4694-856d-7bd0baa4351a" (UID: "b8d4974b-9216-4694-856d-7bd0baa4351a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:51:23 crc kubenswrapper[4889]: I1128 06:51:23.651145 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d4974b-9216-4694-856d-7bd0baa4351a-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:24 crc kubenswrapper[4889]: I1128 06:51:24.148294 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b8d4974b-9216-4694-856d-7bd0baa4351a","Type":"ContainerDied","Data":"a519ec9c13ae721f162aa6ae9973bb252f140ec3704aaf2043eb7e440d71b0e8"} Nov 28 06:51:24 crc kubenswrapper[4889]: I1128 06:51:24.148356 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a519ec9c13ae721f162aa6ae9973bb252f140ec3704aaf2043eb7e440d71b0e8" Nov 28 06:51:24 crc kubenswrapper[4889]: I1128 06:51:24.148358 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:51:28 crc kubenswrapper[4889]: I1128 06:51:28.782606 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:51:28 crc kubenswrapper[4889]: I1128 06:51:28.783608 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:51:28 crc kubenswrapper[4889]: I1128 06:51:28.783695 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:51:28 crc kubenswrapper[4889]: I1128 06:51:28.784740 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f"} pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:51:28 crc kubenswrapper[4889]: I1128 06:51:28.784890 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" containerID="cri-o://7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f" gracePeriod=600 Nov 
Nov 28 06:51:31 crc kubenswrapper[4889]: I1128 06:51:31.196821 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerID="7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f" exitCode=0
Nov 28 06:51:31 crc kubenswrapper[4889]: I1128 06:51:31.196905 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerDied","Data":"7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f"}
Nov 28 06:51:31 crc kubenswrapper[4889]: I1128 06:51:31.197422 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"76dd7acd3eaf576a87373e71bc06c8f9b006b2f4d1a51df32d2690f03d71b3d5"}
Nov 28 06:51:34 crc kubenswrapper[4889]: I1128 06:51:34.214858 4889 generic.go:334] "Generic (PLEG): container finished" podID="bbed2470-346a-4721-ab26-d098ee16a4af" containerID="515a9168dcbff72befeb9cedb82ec2ffec715f4f5cfc41894287d221ab9c4e64" exitCode=0
Nov 28 06:51:34 crc kubenswrapper[4889]: I1128 06:51:34.214965 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8gg9" event={"ID":"bbed2470-346a-4721-ab26-d098ee16a4af","Type":"ContainerDied","Data":"515a9168dcbff72befeb9cedb82ec2ffec715f4f5cfc41894287d221ab9c4e64"}
Nov 28 06:51:34 crc kubenswrapper[4889]: I1128 06:51:34.219556 4889 generic.go:334] "Generic (PLEG): container finished" podID="e1c17912-a129-45b4-b833-04493886c507" containerID="050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d" exitCode=0
Nov 28 06:51:34 crc kubenswrapper[4889]: I1128 06:51:34.219600 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhttl" event={"ID":"e1c17912-a129-45b4-b833-04493886c507","Type":"ContainerDied","Data":"050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d"}
Nov 28 06:51:35 crc kubenswrapper[4889]: I1128 06:51:35.228075 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxh5k" event={"ID":"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b","Type":"ContainerStarted","Data":"cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc"}
Nov 28 06:51:35 crc kubenswrapper[4889]: I1128 06:51:35.231010 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgjw2" event={"ID":"21ab5f29-c8c6-4073-a891-e4ecf8b34189","Type":"ContainerStarted","Data":"c9bcf054731867f846c814913be083b051857419263929487b3f7007b35bd283"}
Nov 28 06:51:35 crc kubenswrapper[4889]: I1128 06:51:35.233106 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f887" event={"ID":"33bff935-df7b-4a61-8cab-84f408c1c9de","Type":"ContainerStarted","Data":"813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911"}
Nov 28 06:51:35 crc kubenswrapper[4889]: I1128 06:51:35.235530 4889 generic.go:334] "Generic (PLEG): container finished" podID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerID="9bfde13bd50089a5b93f8d49f3e227cd6762fdd0666c94a470a1ca62acf5b0d9" exitCode=0
Nov 28 06:51:35 crc kubenswrapper[4889]: I1128 06:51:35.235599 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qp89"
event={"ID":"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7","Type":"ContainerDied","Data":"9bfde13bd50089a5b93f8d49f3e227cd6762fdd0666c94a470a1ca62acf5b0d9"} Nov 28 06:51:36 crc kubenswrapper[4889]: I1128 06:51:36.247777 4889 generic.go:334] "Generic (PLEG): container finished" podID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerID="cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc" exitCode=0 Nov 28 06:51:36 crc kubenswrapper[4889]: I1128 06:51:36.247883 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxh5k" event={"ID":"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b","Type":"ContainerDied","Data":"cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc"} Nov 28 06:51:36 crc kubenswrapper[4889]: I1128 06:51:36.250757 4889 generic.go:334] "Generic (PLEG): container finished" podID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerID="c9bcf054731867f846c814913be083b051857419263929487b3f7007b35bd283" exitCode=0 Nov 28 06:51:36 crc kubenswrapper[4889]: I1128 06:51:36.250834 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgjw2" event={"ID":"21ab5f29-c8c6-4073-a891-e4ecf8b34189","Type":"ContainerDied","Data":"c9bcf054731867f846c814913be083b051857419263929487b3f7007b35bd283"} Nov 28 06:51:36 crc kubenswrapper[4889]: I1128 06:51:36.253313 4889 generic.go:334] "Generic (PLEG): container finished" podID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerID="813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911" exitCode=0 Nov 28 06:51:36 crc kubenswrapper[4889]: I1128 06:51:36.253359 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f887" event={"ID":"33bff935-df7b-4a61-8cab-84f408c1c9de","Type":"ContainerDied","Data":"813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911"} Nov 28 06:51:43 crc kubenswrapper[4889]: I1128 06:51:43.301090 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8gg9" event={"ID":"bbed2470-346a-4721-ab26-d098ee16a4af","Type":"ContainerStarted","Data":"c90292fc8bf500d59a4c4e46249c8ec0f0efebc5132024b285dd23805777b926"} Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.307981 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mw2sc" event={"ID":"214d7b41-e8c9-4e25-bf80-48ff31b4a29b","Type":"ContainerStarted","Data":"64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25"} Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.310048 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxh5k" event={"ID":"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b","Type":"ContainerStarted","Data":"64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90"} Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.312406 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgjw2" event={"ID":"21ab5f29-c8c6-4073-a891-e4ecf8b34189","Type":"ContainerStarted","Data":"07f5c9e722ab93d1dfc89a6d81f4043dc248d1add3ad67fe4bf91d161271647a"} Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.314261 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f887" event={"ID":"33bff935-df7b-4a61-8cab-84f408c1c9de","Type":"ContainerStarted","Data":"3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783"} Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 
06:51:44.316422 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qp89" event={"ID":"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7","Type":"ContainerStarted","Data":"ed01b877701d08669bc737e007dc08f1e3b8de9a778b47010c27878969118a97"}
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.318355 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpsrw" event={"ID":"6ee34603-f895-4e08-88d2-dc04ac976df1","Type":"ContainerStarted","Data":"52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619"}
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.321838 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhttl" event={"ID":"e1c17912-a129-45b4-b833-04493886c507","Type":"ContainerStarted","Data":"999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e"}
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.381154 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6qp89" podStartSLOduration=1.8511810789999998 podStartE2EDuration="1m20.381137385s" podCreationTimestamp="2025-11-28 06:50:24 +0000 UTC" firstStartedPulling="2025-11-28 06:50:25.423969545 +0000 UTC m=+148.394203700" lastFinishedPulling="2025-11-28 06:51:43.953925851 +0000 UTC m=+226.924160006" observedRunningTime="2025-11-28 06:51:44.362224184 +0000 UTC m=+227.332458339" watchObservedRunningTime="2025-11-28 06:51:44.381137385 +0000 UTC m=+227.351371540"
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.384076 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8f887" podStartSLOduration=3.219470322 podStartE2EDuration="1m19.384068625s" podCreationTimestamp="2025-11-28 06:50:25 +0000 UTC" firstStartedPulling="2025-11-28 06:50:27.703675488 +0000 UTC m=+150.673909643" lastFinishedPulling="2025-11-28 06:51:43.868273791 +0000 UTC m=+226.838507946" observedRunningTime="2025-11-28 06:51:44.37969078 +0000 UTC m=+227.349924925" watchObservedRunningTime="2025-11-28 06:51:44.384068625 +0000 UTC m=+227.354302780"
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.402203 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxh5k" podStartSLOduration=2.943596016 podStartE2EDuration="1m22.402187326s" podCreationTimestamp="2025-11-28 06:50:22 +0000 UTC" firstStartedPulling="2025-11-28 06:50:24.416454622 +0000 UTC m=+147.386688777" lastFinishedPulling="2025-11-28 06:51:43.875045942 +0000 UTC m=+226.845280087" observedRunningTime="2025-11-28 06:51:44.399109783 +0000 UTC m=+227.369343938" watchObservedRunningTime="2025-11-28 06:51:44.402187326 +0000 UTC m=+227.372421481"
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.438529 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vhttl" podStartSLOduration=2.806136563 podStartE2EDuration="1m22.438503811s" podCreationTimestamp="2025-11-28 06:50:22 +0000 UTC" firstStartedPulling="2025-11-28 06:50:24.390665318 +0000 UTC m=+147.360899473" lastFinishedPulling="2025-11-28 06:51:44.023032566 +0000 UTC m=+226.993266721" observedRunningTime="2025-11-28 06:51:44.419092529 +0000 UTC m=+227.389326684" watchObservedRunningTime="2025-11-28 06:51:44.438503811 +0000 UTC m=+227.408737966"
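Each "Observed pod startup duration" record carries enough timestamps to re-derive its own numbers. For redhat-marketplace-6qp89 above, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration works out to that end-to-end figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling); that reading is inferred from the arithmetic in this log, not taken from kubelet documentation. A quick check:

```python
from datetime import datetime

# Re-deriving the redhat-marketplace-6qp89 figures from the record above.
# Timestamps are truncated to microseconds; datetime has no nanosecond support.
TS = "%Y-%m-%d %H:%M:%S.%f %z"
created    = datetime.strptime("2025-11-28 06:50:24 +0000", "%Y-%m-%d %H:%M:%S %z")
first_pull = datetime.strptime("2025-11-28 06:50:25.423969 +0000", TS)
last_pull  = datetime.strptime("2025-11-28 06:51:43.953925 +0000", TS)
running    = datetime.strptime("2025-11-28 06:51:44.381137 +0000", TS)  # watchObservedRunningTime

e2e = (running - created).total_seconds()
pulling = (last_pull - first_pull).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s")            # 80.381137 vs "1m20.381137385s"
print(f"podStartSLOduration ~ {e2e - pulling:.6f}s")  # 1.851181 vs 1.8511810789999998
```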
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.439423 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xgjw2" podStartSLOduration=3.8217106899999997 podStartE2EDuration="1m22.439418203s" podCreationTimestamp="2025-11-28 06:50:22 +0000 UTC" firstStartedPulling="2025-11-28 06:50:25.508773507 +0000 UTC m=+148.479007662" lastFinishedPulling="2025-11-28 06:51:44.12648102 +0000 UTC m=+227.096715175" observedRunningTime="2025-11-28 06:51:44.436255717 +0000 UTC m=+227.406489892" watchObservedRunningTime="2025-11-28 06:51:44.439418203 +0000 UTC m=+227.409652358"
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.459854 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t8gg9" podStartSLOduration=5.150144701 podStartE2EDuration="1m22.459837979s" podCreationTimestamp="2025-11-28 06:50:22 +0000 UTC" firstStartedPulling="2025-11-28 06:50:24.376198501 +0000 UTC m=+147.346432656" lastFinishedPulling="2025-11-28 06:51:41.685891779 +0000 UTC m=+224.656125934" observedRunningTime="2025-11-28 06:51:44.457681938 +0000 UTC m=+227.427916093" watchObservedRunningTime="2025-11-28 06:51:44.459837979 +0000 UTC m=+227.430072134"
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.805235 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6qp89"
Nov 28 06:51:44 crc kubenswrapper[4889]: I1128 06:51:44.805365 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6qp89"
Nov 28 06:51:45 crc kubenswrapper[4889]: I1128 06:51:45.329330 4889 generic.go:334] "Generic (PLEG): container finished" podID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerID="52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619" exitCode=0
Nov 28 06:51:45 crc kubenswrapper[4889]: I1128 06:51:45.329375 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpsrw" event={"ID":"6ee34603-f895-4e08-88d2-dc04ac976df1","Type":"ContainerDied","Data":"52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619"}
Nov 28 06:51:45 crc kubenswrapper[4889]: I1128 06:51:45.331921 4889 generic.go:334] "Generic (PLEG): container finished" podID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerID="64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25" exitCode=0
Nov 28 06:51:45 crc kubenswrapper[4889]: I1128 06:51:45.339194 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mw2sc" event={"ID":"214d7b41-e8c9-4e25-bf80-48ff31b4a29b","Type":"ContainerDied","Data":"64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25"}
Nov 28 06:51:45 crc kubenswrapper[4889]: I1128 06:51:45.871134 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6qp89" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="registry-server" probeResult="failure" output=<
Nov 28 06:51:45 crc kubenswrapper[4889]: timeout: failed to connect service ":50051" within 1s
Nov 28 06:51:45 crc kubenswrapper[4889]: >
Nov 28 06:51:45 crc kubenswrapper[4889]: I1128 06:51:45.889734 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:51:45 crc kubenswrapper[4889]: I1128 06:51:45.890316 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8f887"
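The Startup probe failure above ('timeout: failed to connect service ":50051" within 1s') comes from the catalog pods' registry-server container, whose probe appears to be an exec-style check against the pod's gRPC port with a 1-second deadline; that characterization is an assumption, not stated in the log. A toy reproduction of the connect-with-timeout logic, using plain TCP as a stand-in for the real gRPC health check:

```python
import socket

# Hypothetical stand-in for the registry-server startup probe: try to open a
# TCP connection to the gRPC port within 1s, as the probe output above implies.
def can_connect(host="127.0.0.1", port=50051, timeout=1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not can_connect():
    print('timeout: failed to connect service ":50051" within 1s')
```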
Nov 28 06:51:46 crc kubenswrapper[4889]: I1128 06:51:46.340031 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpsrw" event={"ID":"6ee34603-f895-4e08-88d2-dc04ac976df1","Type":"ContainerStarted","Data":"0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54"}
Nov 28 06:51:46 crc kubenswrapper[4889]: I1128 06:51:46.342558 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mw2sc" event={"ID":"214d7b41-e8c9-4e25-bf80-48ff31b4a29b","Type":"ContainerStarted","Data":"f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299"}
Nov 28 06:51:46 crc kubenswrapper[4889]: I1128 06:51:46.362492 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qpsrw" podStartSLOduration=3.22188305 podStartE2EDuration="1m21.362470729s" podCreationTimestamp="2025-11-28 06:50:25 +0000 UTC" firstStartedPulling="2025-11-28 06:50:27.759001798 +0000 UTC m=+150.729235953" lastFinishedPulling="2025-11-28 06:51:45.899589477 +0000 UTC m=+228.869823632" observedRunningTime="2025-11-28 06:51:46.360100123 +0000 UTC m=+229.330334278" watchObservedRunningTime="2025-11-28 06:51:46.362470729 +0000 UTC m=+229.332704894"
Nov 28 06:51:46 crc kubenswrapper[4889]: I1128 06:51:46.382231 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mw2sc" podStartSLOduration=1.917042397 podStartE2EDuration="1m22.382211159s" podCreationTimestamp="2025-11-28 06:50:24 +0000 UTC" firstStartedPulling="2025-11-28 06:50:25.469869829 +0000 UTC m=+148.440103984" lastFinishedPulling="2025-11-28 06:51:45.935038591 +0000 UTC m=+228.905272746" observedRunningTime="2025-11-28 06:51:46.378725476 +0000 UTC m=+229.348959641" watchObservedRunningTime="2025-11-28 06:51:46.382211159 +0000 UTC m=+229.352445314"
Nov 28 06:51:46 crc kubenswrapper[4889]: I1128 06:51:46.932717 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8f887" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="registry-server" probeResult="failure" output=<
Nov 28 06:51:46 crc kubenswrapper[4889]: timeout: failed to connect service ":50051" within 1s
Nov 28 06:51:46 crc kubenswrapper[4889]: >
Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.406673 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vhttl"
Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.407607 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vhttl"
Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.469366 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vhttl"
Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.527273 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vhttl"
Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.616871 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rxh5k"
Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.617829 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxh5k"
Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.656251 4889 kubelet.go:2542] "SyncLoop (probe)"
probe="startup" status="started" pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.808306 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.808374 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:51:52 crc kubenswrapper[4889]: I1128 06:51:52.852033 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:51:53 crc kubenswrapper[4889]: I1128 06:51:53.036945 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:51:53 crc kubenswrapper[4889]: I1128 06:51:53.037089 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:51:53 crc kubenswrapper[4889]: I1128 06:51:53.080249 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:51:53 crc kubenswrapper[4889]: I1128 06:51:53.554212 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:51:53 crc kubenswrapper[4889]: I1128 06:51:53.556806 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:51:53 crc kubenswrapper[4889]: I1128 06:51:53.572757 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:51:54 crc kubenswrapper[4889]: I1128 06:51:54.427785 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:51:54 crc kubenswrapper[4889]: I1128 06:51:54.427850 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:51:54 crc kubenswrapper[4889]: I1128 06:51:54.468218 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:51:54 crc kubenswrapper[4889]: I1128 06:51:54.515160 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xgjw2"] Nov 28 06:51:54 crc kubenswrapper[4889]: I1128 06:51:54.534508 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mw2sc" Nov 28 06:51:54 crc kubenswrapper[4889]: I1128 06:51:54.846206 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:51:54 crc kubenswrapper[4889]: I1128 06:51:54.881806 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:51:55 crc kubenswrapper[4889]: I1128 06:51:55.113475 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8gg9"] Nov 28 06:51:55 crc kubenswrapper[4889]: I1128 06:51:55.505760 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t8gg9" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" containerName="registry-server" 
containerID="cri-o://c90292fc8bf500d59a4c4e46249c8ec0f0efebc5132024b285dd23805777b926" gracePeriod=2 Nov 28 06:51:55 crc kubenswrapper[4889]: I1128 06:51:55.949366 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8f887" Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.003917 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8f887" Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.187629 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qpsrw" Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.188096 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qpsrw" Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.238299 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qpsrw" Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.417641 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mgdw9"] Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.517176 4889 generic.go:334] "Generic (PLEG): container finished" podID="bbed2470-346a-4721-ab26-d098ee16a4af" containerID="c90292fc8bf500d59a4c4e46249c8ec0f0efebc5132024b285dd23805777b926" exitCode=0 Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.517306 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8gg9" event={"ID":"bbed2470-346a-4721-ab26-d098ee16a4af","Type":"ContainerDied","Data":"c90292fc8bf500d59a4c4e46249c8ec0f0efebc5132024b285dd23805777b926"} Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.517785 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xgjw2" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerName="registry-server" containerID="cri-o://07f5c9e722ab93d1dfc89a6d81f4043dc248d1add3ad67fe4bf91d161271647a" gracePeriod=2 Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.567006 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qpsrw" Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.913525 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qp89"] Nov 28 06:51:56 crc kubenswrapper[4889]: I1128 06:51:56.914431 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6qp89" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="registry-server" containerID="cri-o://ed01b877701d08669bc737e007dc08f1e3b8de9a778b47010c27878969118a97" gracePeriod=2 Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.028347 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.158178 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-utilities\") pod \"bbed2470-346a-4721-ab26-d098ee16a4af\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.158334 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-catalog-content\") pod \"bbed2470-346a-4721-ab26-d098ee16a4af\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.158363 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhf9x\" (UniqueName: \"kubernetes.io/projected/bbed2470-346a-4721-ab26-d098ee16a4af-kube-api-access-fhf9x\") pod \"bbed2470-346a-4721-ab26-d098ee16a4af\" (UID: \"bbed2470-346a-4721-ab26-d098ee16a4af\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.159501 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-utilities" (OuterVolumeSpecName: "utilities") pod "bbed2470-346a-4721-ab26-d098ee16a4af" (UID: "bbed2470-346a-4721-ab26-d098ee16a4af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.167777 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbed2470-346a-4721-ab26-d098ee16a4af-kube-api-access-fhf9x" (OuterVolumeSpecName: "kube-api-access-fhf9x") pod "bbed2470-346a-4721-ab26-d098ee16a4af" (UID: "bbed2470-346a-4721-ab26-d098ee16a4af"). InnerVolumeSpecName "kube-api-access-fhf9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.207138 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbed2470-346a-4721-ab26-d098ee16a4af" (UID: "bbed2470-346a-4721-ab26-d098ee16a4af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.259844 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.259889 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhf9x\" (UniqueName: \"kubernetes.io/projected/bbed2470-346a-4721-ab26-d098ee16a4af-kube-api-access-fhf9x\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.259928 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbed2470-346a-4721-ab26-d098ee16a4af-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.540878 4889 generic.go:334] "Generic (PLEG): container finished" podID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerID="07f5c9e722ab93d1dfc89a6d81f4043dc248d1add3ad67fe4bf91d161271647a" exitCode=0 Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.541095 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgjw2" event={"ID":"21ab5f29-c8c6-4073-a891-e4ecf8b34189","Type":"ContainerDied","Data":"07f5c9e722ab93d1dfc89a6d81f4043dc248d1add3ad67fe4bf91d161271647a"} Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.553487 4889 generic.go:334] "Generic (PLEG): container finished" podID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerID="ed01b877701d08669bc737e007dc08f1e3b8de9a778b47010c27878969118a97" exitCode=0 Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.553662 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qp89" event={"ID":"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7","Type":"ContainerDied","Data":"ed01b877701d08669bc737e007dc08f1e3b8de9a778b47010c27878969118a97"} Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.557034 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8gg9" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.557437 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8gg9" event={"ID":"bbed2470-346a-4721-ab26-d098ee16a4af","Type":"ContainerDied","Data":"8213c9f8a202b063208276b8dab9c0eb7b5ee4b1fbe662c4a0d7204dc655c60c"} Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.557474 4889 scope.go:117] "RemoveContainer" containerID="c90292fc8bf500d59a4c4e46249c8ec0f0efebc5132024b285dd23805777b926" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.578485 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8gg9"] Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.580058 4889 scope.go:117] "RemoveContainer" containerID="515a9168dcbff72befeb9cedb82ec2ffec715f4f5cfc41894287d221ab9c4e64" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.583928 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t8gg9"] Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.633130 4889 scope.go:117] "RemoveContainer" containerID="50ed9263de170f589aee0ae53569924a4c61b6cdc125318e43dc75b1dde35173" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.708033 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.767840 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-utilities\") pod \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.767894 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-catalog-content\") pod \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.767962 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxvlg\" (UniqueName: \"kubernetes.io/projected/21ab5f29-c8c6-4073-a891-e4ecf8b34189-kube-api-access-vxvlg\") pod \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\" (UID: \"21ab5f29-c8c6-4073-a891-e4ecf8b34189\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.768993 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-utilities" (OuterVolumeSpecName: "utilities") pod "21ab5f29-c8c6-4073-a891-e4ecf8b34189" (UID: "21ab5f29-c8c6-4073-a891-e4ecf8b34189"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.788735 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ab5f29-c8c6-4073-a891-e4ecf8b34189-kube-api-access-vxvlg" (OuterVolumeSpecName: "kube-api-access-vxvlg") pod "21ab5f29-c8c6-4073-a891-e4ecf8b34189" (UID: "21ab5f29-c8c6-4073-a891-e4ecf8b34189"). InnerVolumeSpecName "kube-api-access-vxvlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.820066 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21ab5f29-c8c6-4073-a891-e4ecf8b34189" (UID: "21ab5f29-c8c6-4073-a891-e4ecf8b34189"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.842971 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.868918 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-catalog-content\") pod \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.869003 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4lmm\" (UniqueName: \"kubernetes.io/projected/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-kube-api-access-w4lmm\") pod \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.869948 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-utilities\") pod \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\" (UID: \"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7\") " Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.870914 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-utilities" (OuterVolumeSpecName: "utilities") pod "cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" (UID: "cc552cb6-fad3-4a78-a7b5-c3b81c4336a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.870977 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.871016 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab5f29-c8c6-4073-a891-e4ecf8b34189-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.871053 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxvlg\" (UniqueName: \"kubernetes.io/projected/21ab5f29-c8c6-4073-a891-e4ecf8b34189-kube-api-access-vxvlg\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.873675 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-kube-api-access-w4lmm" (OuterVolumeSpecName: "kube-api-access-w4lmm") pod "cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" (UID: "cc552cb6-fad3-4a78-a7b5-c3b81c4336a7"). InnerVolumeSpecName "kube-api-access-w4lmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.909118 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" (UID: "cc552cb6-fad3-4a78-a7b5-c3b81c4336a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.972044 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.972566 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4lmm\" (UniqueName: \"kubernetes.io/projected/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-kube-api-access-w4lmm\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:57 crc kubenswrapper[4889]: I1128 06:51:57.972581 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.566510 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qp89" event={"ID":"cc552cb6-fad3-4a78-a7b5-c3b81c4336a7","Type":"ContainerDied","Data":"15b6028c32924888bbe211024604533076b0c7402a9eb1f0cf35e9f1277fe167"} Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.566587 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qp89" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.566598 4889 scope.go:117] "RemoveContainer" containerID="ed01b877701d08669bc737e007dc08f1e3b8de9a778b47010c27878969118a97" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.571548 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgjw2" event={"ID":"21ab5f29-c8c6-4073-a891-e4ecf8b34189","Type":"ContainerDied","Data":"2444820c2e4bffd0c3eb37137e085076daa6592028ae22c92872714c4eec064a"} Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.571769 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xgjw2" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.581937 4889 scope.go:117] "RemoveContainer" containerID="9bfde13bd50089a5b93f8d49f3e227cd6762fdd0666c94a470a1ca62acf5b0d9" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.597230 4889 scope.go:117] "RemoveContainer" containerID="34048bdf5b8e5094f95d28e9a257df1c9f5d6918e4be2588a7be589f64029804" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.602159 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qp89"] Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.605096 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qp89"] Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.618660 4889 scope.go:117] "RemoveContainer" containerID="07f5c9e722ab93d1dfc89a6d81f4043dc248d1add3ad67fe4bf91d161271647a" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.627262 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xgjw2"] Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.631575 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xgjw2"] Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.640529 4889 scope.go:117] "RemoveContainer" containerID="c9bcf054731867f846c814913be083b051857419263929487b3f7007b35bd283" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.655494 4889 scope.go:117] "RemoveContainer" containerID="5be5680e5b810a147ee0996e60bd064c858b8aefe134e225fc85d496cf7782d1" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.704577 4889 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705004 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705020 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705033 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705039 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705054 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerName="extract-content" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705060 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerName="extract-content" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705071 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerName="extract-utilities" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705078 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerName="extract-utilities" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705104 4889 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" containerName="extract-utilities" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705110 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" containerName="extract-utilities" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705121 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d4974b-9216-4694-856d-7bd0baa4351a" containerName="pruner" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705127 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d4974b-9216-4694-856d-7bd0baa4351a" containerName="pruner" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705141 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="extract-content" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705148 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="extract-content" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705156 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="extract-utilities" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705162 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="extract-utilities" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705170 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" containerName="extract-content" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705176 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" containerName="extract-content" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.705185 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705190 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705345 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705359 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705372 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d4974b-9216-4694-856d-7bd0baa4351a" containerName="pruner" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705381 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" containerName="registry-server" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.705919 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.706416 4889 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.707093 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402" gracePeriod=15 Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.707195 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113" gracePeriod=15 Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.707108 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889" gracePeriod=15 Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.707130 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416" gracePeriod=15 Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.707104 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c" gracePeriod=15 Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708122 4889 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.708691 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708723 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.708737 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708746 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.708762 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708768 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 
Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.708779 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708784 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.708794 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708799 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.708816 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708823 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 06:51:58 crc kubenswrapper[4889]: E1128 06:51:58.708831 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708837 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708946 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708955 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708968 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708981 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708989 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.708998 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.884377 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.884436 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.884457 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.884477 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.884504 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.884564 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.884585 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.884607 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986146 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986199 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986248 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986243 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986272 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986320 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986342 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986320 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986454 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986489 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986377 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986544 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986564 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986573 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:58 crc kubenswrapper[4889]: I1128 06:51:58.986600 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.342043 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ab5f29-c8c6-4073-a891-e4ecf8b34189" path="/var/lib/kubelet/pods/21ab5f29-c8c6-4073-a891-e4ecf8b34189/volumes" Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.343277 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbed2470-346a-4721-ab26-d098ee16a4af" path="/var/lib/kubelet/pods/bbed2470-346a-4721-ab26-d098ee16a4af/volumes" Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.344413 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc552cb6-fad3-4a78-a7b5-c3b81c4336a7" path="/var/lib/kubelet/pods/cc552cb6-fad3-4a78-a7b5-c3b81c4336a7/volumes" Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.579478 4889 generic.go:334] "Generic (PLEG): container finished" podID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" containerID="2df6268fbd69adac311306893fdf1931679b78013d672f716ba605f5751bfb62" exitCode=0 Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.579536 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c","Type":"ContainerDied","Data":"2df6268fbd69adac311306893fdf1931679b78013d672f716ba605f5751bfb62"} Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.580287 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.582990 4889 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.584083 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.585203 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c" exitCode=0 Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.585226 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889" exitCode=0 Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.585236 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416" exitCode=0 Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.585245 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113" exitCode=2 Nov 28 06:51:59 crc kubenswrapper[4889]: I1128 06:51:59.585281 4889 scope.go:117] "RemoveContainer" containerID="77dacf512593485f60cba484c06474d0422234afcc49105b42e04e913e806502" Nov 28 06:52:00 crc kubenswrapper[4889]: I1128 06:52:00.601035 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:52:00 crc kubenswrapper[4889]: I1128 06:52:00.883338 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:52:00 crc kubenswrapper[4889]: I1128 06:52:00.884338 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.019245 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-var-lock\") pod \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.019339 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kubelet-dir\") pod \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.019417 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kube-api-access\") pod \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\" (UID: \"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c\") " Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.020795 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-var-lock" (OuterVolumeSpecName: "var-lock") pod "45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" (UID: "45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.020846 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" (UID: "45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.024930 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" (UID: "45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.121572 4889 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.121599 4889 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.121607 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.560117 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.561017 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.561628 4889 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.562184 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.609835 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c","Type":"ContainerDied","Data":"4b91206cd06e0f1566b9a5c64835e7539a86ca07a82804c036e6a63ad6f94d0e"} Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.609876 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b91206cd06e0f1566b9a5c64835e7539a86ca07a82804c036e6a63ad6f94d0e" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.609883 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.614608 4889 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.614729 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.615042 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.615513 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402" exitCode=0 Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.615583 4889 scope.go:117] "RemoveContainer" containerID="fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.615620 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.632306 4889 scope.go:117] "RemoveContainer" containerID="9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.644290 4889 scope.go:117] "RemoveContainer" containerID="43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.659063 4889 scope.go:117] "RemoveContainer" containerID="46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.671112 4889 scope.go:117] "RemoveContainer" containerID="c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.687521 4889 scope.go:117] "RemoveContainer" containerID="f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.710010 4889 scope.go:117] "RemoveContainer" containerID="fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.710407 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\": container with ID starting with fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c not found: ID does not exist" containerID="fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.710456 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c"} err="failed to get container status 
\"fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\": rpc error: code = NotFound desc = could not find container \"fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c\": container with ID starting with fab031156ed69fe5aa102ff507cf64738e06b36446901e13d2515b81ad512d4c not found: ID does not exist" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.710486 4889 scope.go:117] "RemoveContainer" containerID="9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.710766 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\": container with ID starting with 9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889 not found: ID does not exist" containerID="9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.710786 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889"} err="failed to get container status \"9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\": rpc error: code = NotFound desc = could not find container \"9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889\": container with ID starting with 9ebe42fb404e61ebddcf725b55889fcf3edf1712cb9ad78c711e08017cb75889 not found: ID does not exist" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.710798 4889 scope.go:117] "RemoveContainer" containerID="43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.711001 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\": container with ID starting with 43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416 not found: ID does not exist" containerID="43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.711025 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416"} err="failed to get container status \"43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\": rpc error: code = NotFound desc = could not find container \"43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416\": container with ID starting with 43f52c9bf7bcbb24588d44579ec34c3745f0dfe8e3a9ee7fec4a9bd8c29b3416 not found: ID does not exist" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.711042 4889 scope.go:117] "RemoveContainer" containerID="46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.711213 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\": container with ID starting with 46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113 not found: ID does not exist" containerID="46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.711231 4889 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113"} err="failed to get container status \"46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\": rpc error: code = NotFound desc = could not find container \"46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113\": container with ID starting with 46d99f824da23c7949d5f4c5986b27954ae093a082270c506b6712cc1b98c113 not found: ID does not exist" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.711242 4889 scope.go:117] "RemoveContainer" containerID="c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.711398 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\": container with ID starting with c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402 not found: ID does not exist" containerID="c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.711415 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402"} err="failed to get container status \"c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\": rpc error: code = NotFound desc = could not find container \"c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402\": container with ID starting with c627074f1795f8f729efdac2271016c64aa7df70ceac91da093f02bdd7b84402 not found: ID does not exist" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.711426 4889 scope.go:117] "RemoveContainer" containerID="f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.711605 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\": container with ID starting with f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145 not found: ID does not exist" containerID="f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.711623 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145"} err="failed to get container status \"f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\": rpc error: code = NotFound desc = could not find container \"f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145\": container with ID starting with f1b5364b08b61438cc14b82c35f402c6df2c0d143e9125b80aea50e54e068145 not found: ID does not exist" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.729150 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.729226 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.729255 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.729573 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.729718 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.729746 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.830446 4889 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.830479 4889 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.830496 4889 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.932180 4889 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.932474 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.995347 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 
06:52:01.995722 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.996055 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.996287 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.996485 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:01 crc kubenswrapper[4889]: I1128 06:52:01.996537 4889 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 06:52:01 crc kubenswrapper[4889]: E1128 06:52:01.996779 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Nov 28 06:52:02 crc kubenswrapper[4889]: E1128 06:52:02.197327 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Nov 28 06:52:02 crc kubenswrapper[4889]: E1128 06:52:02.598725 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Nov 28 06:52:03 crc kubenswrapper[4889]: I1128 06:52:03.339665 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 28 06:52:03 crc kubenswrapper[4889]: E1128 06:52:03.402058 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Nov 28 06:52:03 crc kubenswrapper[4889]: E1128 06:52:03.795449 4889 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:52:03 crc kubenswrapper[4889]: I1128 06:52:03.795858 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:52:03 crc kubenswrapper[4889]: W1128 06:52:03.811055 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f80433f7f8da8439232a2bebacca0d49d756a9156b4e0a4f0ee0237213eb689e WatchSource:0}: Error finding container f80433f7f8da8439232a2bebacca0d49d756a9156b4e0a4f0ee0237213eb689e: Status 404 returned error can't find the container with id f80433f7f8da8439232a2bebacca0d49d756a9156b4e0a4f0ee0237213eb689e Nov 28 06:52:03 crc kubenswrapper[4889]: E1128 06:52:03.813334 4889 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c19131792643d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:52:03.812934717 +0000 UTC m=+246.783168872,LastTimestamp:2025-11-28 06:52:03.812934717 +0000 UTC m=+246.783168872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 06:52:04 crc kubenswrapper[4889]: I1128 06:52:04.632833 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f"} Nov 28 06:52:04 crc kubenswrapper[4889]: I1128 06:52:04.633154 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f80433f7f8da8439232a2bebacca0d49d756a9156b4e0a4f0ee0237213eb689e"} Nov 28 06:52:04 crc kubenswrapper[4889]: I1128 06:52:04.633928 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:04 crc kubenswrapper[4889]: E1128 06:52:04.633952 4889 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:52:05 crc kubenswrapper[4889]: E1128 06:52:05.003179 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s" Nov 
28 06:52:05 crc kubenswrapper[4889]: E1128 06:52:05.391663 4889 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" volumeName="registry-storage" Nov 28 06:52:07 crc kubenswrapper[4889]: I1128 06:52:07.336404 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:08 crc kubenswrapper[4889]: E1128 06:52:08.205039 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="6.4s" Nov 28 06:52:09 crc kubenswrapper[4889]: E1128 06:52:09.666836 4889 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c19131792643d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:52:03.812934717 +0000 UTC m=+246.783168872,LastTimestamp:2025-11-28 06:52:03.812934717 +0000 UTC m=+246.783168872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 06:52:12 crc kubenswrapper[4889]: I1128 06:52:12.023766 4889 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 06:52:12 crc kubenswrapper[4889]: I1128 06:52:12.024144 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 06:52:12 crc kubenswrapper[4889]: I1128 06:52:12.687307 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 06:52:12 crc kubenswrapper[4889]: I1128 06:52:12.687392 4889 
generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6" exitCode=1 Nov 28 06:52:12 crc kubenswrapper[4889]: I1128 06:52:12.687435 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6"} Nov 28 06:52:12 crc kubenswrapper[4889]: I1128 06:52:12.688175 4889 scope.go:117] "RemoveContainer" containerID="4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6" Nov 28 06:52:12 crc kubenswrapper[4889]: I1128 06:52:12.689117 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:12 crc kubenswrapper[4889]: I1128 06:52:12.689621 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.331527 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.332391 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.332926 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.344345 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.344378 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:13 crc kubenswrapper[4889]: E1128 06:52:13.344857 4889 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.345211 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:13 crc kubenswrapper[4889]: W1128 06:52:13.361221 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-f9823ccfa4f19665e43248f20cdd523a6bd01b476f77f150436e3174971d2d51 WatchSource:0}: Error finding container f9823ccfa4f19665e43248f20cdd523a6bd01b476f77f150436e3174971d2d51: Status 404 returned error can't find the container with id f9823ccfa4f19665e43248f20cdd523a6bd01b476f77f150436e3174971d2d51 Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.698903 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.699213 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e5be5a0b5016801958dc73d7c325787c91879521f24c07c2da5031e8344e7342"} Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.700997 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.701574 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.702689 4889 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b9dd5631611d3209d619e2e688a4f24b9f9851c6df0a2ed137b465ffd3e38649" exitCode=0 Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.702799 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b9dd5631611d3209d619e2e688a4f24b9f9851c6df0a2ed137b465ffd3e38649"} Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.703767 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f9823ccfa4f19665e43248f20cdd523a6bd01b476f77f150436e3174971d2d51"} Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.704136 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.704162 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:13 crc kubenswrapper[4889]: E1128 06:52:13.704742 4889 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.705232 4889 status_manager.go:851] "Failed to get status for pod" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:13 crc kubenswrapper[4889]: I1128 06:52:13.705670 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Nov 28 06:52:14 crc kubenswrapper[4889]: I1128 06:52:14.445340 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:14 crc kubenswrapper[4889]: I1128 06:52:14.710644 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2c715df35a32276c89b6377a711c8a7499379acc4c6752d9623342d977c8feb5"} Nov 28 06:52:14 crc kubenswrapper[4889]: I1128 06:52:14.710724 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4d1c16fd9c7c629c177472a41f855d5d02727e10a08da23afde54fe0184155e2"} Nov 28 06:52:14 crc kubenswrapper[4889]: I1128 06:52:14.710736 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cda434eb87130bfad1d2321532fa5308dc70d4c92db9f23fa57d2358f34eb584"} Nov 28 06:52:15 crc kubenswrapper[4889]: I1128 06:52:15.720675 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83abdd12682cb790245437c30286822e8cc676af1f37d5c75d8484ad6cb829c9"} Nov 28 06:52:15 crc kubenswrapper[4889]: I1128 06:52:15.721025 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:15 crc kubenswrapper[4889]: I1128 06:52:15.721039 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d7dea691dfd138fb7df912f800571744d78ce76f0b2e9d133bb1fedcbcc10283"} Nov 28 06:52:15 crc kubenswrapper[4889]: I1128 06:52:15.720976 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:15 crc kubenswrapper[4889]: I1128 06:52:15.721060 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:18 crc kubenswrapper[4889]: I1128 06:52:18.346428 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:18 crc kubenswrapper[4889]: I1128 06:52:18.346829 4889 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:18 crc kubenswrapper[4889]: I1128 06:52:18.353776 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:18 crc kubenswrapper[4889]: I1128 06:52:18.576533 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:18 crc kubenswrapper[4889]: I1128 06:52:18.577055 4889 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 06:52:18 crc kubenswrapper[4889]: I1128 06:52:18.577168 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 06:52:20 crc kubenswrapper[4889]: I1128 06:52:20.737550 4889 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:20 crc kubenswrapper[4889]: I1128 06:52:20.802246 4889 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df619121-91e6-49e8-ae09-c9136df8797e" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.450640 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" podUID="40f4d399-8f92-4d2f-afa4-8f460aff4348" containerName="oauth-openshift" containerID="cri-o://88691fda2084d5406bf1eb28f5f09c999911a8110f6141d5587cc52fd65c1dee" gracePeriod=15 Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.756419 4889 generic.go:334] "Generic (PLEG): container finished" podID="40f4d399-8f92-4d2f-afa4-8f460aff4348" containerID="88691fda2084d5406bf1eb28f5f09c999911a8110f6141d5587cc52fd65c1dee" exitCode=0 Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.756921 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.756954 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.757130 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" event={"ID":"40f4d399-8f92-4d2f-afa4-8f460aff4348","Type":"ContainerDied","Data":"88691fda2084d5406bf1eb28f5f09c999911a8110f6141d5587cc52fd65c1dee"} Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.763771 4889 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df619121-91e6-49e8-ae09-c9136df8797e" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.764021 4889 status_manager.go:308] "Container readiness changed before pod has synced" 
pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://cda434eb87130bfad1d2321532fa5308dc70d4c92db9f23fa57d2358f34eb584" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.764057 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.846129 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967000 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vp8q\" (UniqueName: \"kubernetes.io/projected/40f4d399-8f92-4d2f-afa4-8f460aff4348-kube-api-access-5vp8q\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967051 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-trusted-ca-bundle\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967092 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-policies\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967131 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-login\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967168 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-error\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967208 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-provider-selection\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967233 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-cliconfig\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967254 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-session\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: 
\"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967269 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-idp-0-file-data\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967290 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-service-ca\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967319 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-dir\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967394 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-router-certs\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967430 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-serving-cert\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967462 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-ocp-branding-template\") pod \"40f4d399-8f92-4d2f-afa4-8f460aff4348\" (UID: \"40f4d399-8f92-4d2f-afa4-8f460aff4348\") " Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.967627 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.968143 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.968511 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.968460 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.969346 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.972620 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.972768 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.972932 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f4d399-8f92-4d2f-afa4-8f460aff4348-kube-api-access-5vp8q" (OuterVolumeSpecName: "kube-api-access-5vp8q") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "kube-api-access-5vp8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.973141 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.973323 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.973593 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.973794 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.973964 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:21 crc kubenswrapper[4889]: I1128 06:52:21.978048 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "40f4d399-8f92-4d2f-afa4-8f460aff4348" (UID: "40f4d399-8f92-4d2f-afa4-8f460aff4348"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069026 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069069 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069084 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069100 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069116 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069129 4889 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069142 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069156 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069167 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069179 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vp8q\" (UniqueName: \"kubernetes.io/projected/40f4d399-8f92-4d2f-afa4-8f460aff4348-kube-api-access-5vp8q\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069190 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069200 4889 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/40f4d399-8f92-4d2f-afa4-8f460aff4348-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069208 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.069219 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40f4d399-8f92-4d2f-afa4-8f460aff4348-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.767647 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" event={"ID":"40f4d399-8f92-4d2f-afa4-8f460aff4348","Type":"ContainerDied","Data":"b4f5114e815a11048a3877daf3b33b6147019eccc587426c8f9a80cb95420d06"} Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.768110 4889 scope.go:117] "RemoveContainer" containerID="88691fda2084d5406bf1eb28f5f09c999911a8110f6141d5587cc52fd65c1dee" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.767666 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mgdw9" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.767854 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.768266 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:22 crc kubenswrapper[4889]: I1128 06:52:22.773299 4889 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df619121-91e6-49e8-ae09-c9136df8797e" Nov 28 06:52:28 crc kubenswrapper[4889]: I1128 06:52:28.576493 4889 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 06:52:28 crc kubenswrapper[4889]: I1128 06:52:28.576788 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 06:52:30 crc kubenswrapper[4889]: I1128 06:52:30.134766 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 06:52:30 crc kubenswrapper[4889]: I1128 06:52:30.969855 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 06:52:31 crc kubenswrapper[4889]: I1128 06:52:31.114570 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 06:52:31 crc kubenswrapper[4889]: I1128 06:52:31.354165 4889 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 06:52:32 crc kubenswrapper[4889]: I1128 06:52:32.272847 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 06:52:32 crc kubenswrapper[4889]: I1128 06:52:32.319571 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 06:52:32 crc kubenswrapper[4889]: I1128 06:52:32.401930 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 06:52:32 crc kubenswrapper[4889]: I1128 06:52:32.424381 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 06:52:32 crc kubenswrapper[4889]: I1128 06:52:32.487676 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 06:52:32 crc kubenswrapper[4889]: I1128 06:52:32.917848 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.085773 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.152148 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.255857 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.281355 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.300167 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.403168 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.515192 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.546860 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.556744 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.576405 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.587232 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.653917 4889 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.669033 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.677426 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 06:52:33 crc kubenswrapper[4889]: I1128 06:52:33.681813 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.009499 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.009642 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.047379 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.081132 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.217056 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.287127 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.511990 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.523616 4889 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.529040 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mgdw9","openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.529111 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.529555 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.529598 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="027e3d13-3693-4e70-bd3a-e63d0faa96f1" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.532765 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.551389 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.551371957 podStartE2EDuration="14.551371957s" podCreationTimestamp="2025-11-28 06:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 06:52:34.548115428 +0000 UTC m=+277.518349583" watchObservedRunningTime="2025-11-28 06:52:34.551371957 +0000 UTC m=+277.521606112"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.606680 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.624842 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.635058 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.723519 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.754513 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.757614 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.805961 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.901075 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.947432 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpsrw"]
Nov 28 06:52:34 crc kubenswrapper[4889]: I1128 06:52:34.947796 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qpsrw" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerName="registry-server" containerID="cri-o://0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54" gracePeriod=2
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.130102 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.174117 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.174410 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.179922 4889 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.227142 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.228669 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.250365 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.264341 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.270213 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.347518 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f4d399-8f92-4d2f-afa4-8f460aff4348" path="/var/lib/kubelet/pods/40f4d399-8f92-4d2f-afa4-8f460aff4348/volumes"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.400686 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.457368 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.525058 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.578555 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbmmq\" (UniqueName: \"kubernetes.io/projected/6ee34603-f895-4e08-88d2-dc04ac976df1-kube-api-access-hbmmq\") pod \"6ee34603-f895-4e08-88d2-dc04ac976df1\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") "
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.579396 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-utilities" (OuterVolumeSpecName: "utilities") pod "6ee34603-f895-4e08-88d2-dc04ac976df1" (UID: "6ee34603-f895-4e08-88d2-dc04ac976df1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.578700 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-utilities\") pod \"6ee34603-f895-4e08-88d2-dc04ac976df1\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") "
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.579540 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-catalog-content\") pod \"6ee34603-f895-4e08-88d2-dc04ac976df1\" (UID: \"6ee34603-f895-4e08-88d2-dc04ac976df1\") "
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.582094 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.585080 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee34603-f895-4e08-88d2-dc04ac976df1-kube-api-access-hbmmq" (OuterVolumeSpecName: "kube-api-access-hbmmq") pod "6ee34603-f895-4e08-88d2-dc04ac976df1" (UID: "6ee34603-f895-4e08-88d2-dc04ac976df1"). InnerVolumeSpecName "kube-api-access-hbmmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.629732 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.645092 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.682514 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbmmq\" (UniqueName: \"kubernetes.io/projected/6ee34603-f895-4e08-88d2-dc04ac976df1-kube-api-access-hbmmq\") on node \"crc\" DevicePath \"\""
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.691020 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.697360 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.711418 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ee34603-f895-4e08-88d2-dc04ac976df1" (UID: "6ee34603-f895-4e08-88d2-dc04ac976df1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.746282 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.746422 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.771160 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.783735 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee34603-f895-4e08-88d2-dc04ac976df1-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.828995 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.851692 4889 generic.go:334] "Generic (PLEG): container finished" podID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerID="0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54" exitCode=0
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.851745 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpsrw" event={"ID":"6ee34603-f895-4e08-88d2-dc04ac976df1","Type":"ContainerDied","Data":"0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54"}
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.851857 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpsrw" event={"ID":"6ee34603-f895-4e08-88d2-dc04ac976df1","Type":"ContainerDied","Data":"7cef2bf261514310463e3b436a8ee2dba5d401800a12471112217302c05a5148"}
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.851899 4889 scope.go:117] "RemoveContainer" containerID="0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.851928 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpsrw"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.858170 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.868233 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.874602 4889 scope.go:117] "RemoveContainer" containerID="52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.893828 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpsrw"]
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.896078 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qpsrw"]
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.899135 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.906485 4889 scope.go:117] "RemoveContainer" containerID="4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.926286 4889 scope.go:117] "RemoveContainer" containerID="0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54"
Nov 28 06:52:35 crc kubenswrapper[4889]: E1128 06:52:35.929223 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54\": container with ID starting with 0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54 not found: ID does not exist" containerID="0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.929314 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54"} err="failed to get container status \"0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54\": rpc error: code = NotFound desc = could not find container \"0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54\": container with ID starting with 0701443b42d2449f33544c3ed3dac8b0c05deb26470c55f20ae0abcf52cc7d54 not found: ID does not exist"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.929372 4889 scope.go:117] "RemoveContainer" containerID="52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619"
Nov 28 06:52:35 crc kubenswrapper[4889]: E1128 06:52:35.929883 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619\": container with ID starting with 52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619 not found: ID does not exist" containerID="52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.929944 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619"} err="failed to get container status \"52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619\": rpc error: code = NotFound desc = could not find container \"52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619\": container with ID starting with 52035aa00a51bde0356c28b07fcc42209850b3f1e6bdbf403206a4ab410d5619 not found: ID does not exist"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.929984 4889 scope.go:117] "RemoveContainer" containerID="4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f"
Nov 28 06:52:35 crc kubenswrapper[4889]: E1128 06:52:35.930597 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f\": container with ID starting with 4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f not found: ID does not exist" containerID="4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.930643 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f"} err="failed to get container status \"4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f\": rpc error: code = NotFound desc = could not find container \"4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f\": container with ID starting with 4028c65190551720a8fd635e30bc234eed8f5b6fb117a087c3c079b71542ae3f not found: ID does not exist"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.941097 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.942963 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 28 06:52:35 crc kubenswrapper[4889]: I1128 06:52:35.993989 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.016149 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.057750 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.093179 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.195232 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.429524 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.450604 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.484376 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.602877 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.651563 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.656641 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.947650 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 28 06:52:36 crc kubenswrapper[4889]: I1128 06:52:36.971134 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.088794 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.129856 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.337618 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" path="/var/lib/kubelet/pods/6ee34603-f895-4e08-88d2-dc04ac976df1/volumes"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.414465 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.474093 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.495816 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.569202 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.655578 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.683540 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.843904 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.889556 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.891415 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.907569 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.918358 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.937179 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 28 06:52:37 crc kubenswrapper[4889]: I1128 06:52:37.952366 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.027398 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.044609 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.059421 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.078138 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.081041 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.100242 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.171647 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.412317 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.419787 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.429249 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.484465 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.515286 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.516590 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.542563 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.576458 4889 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.576600 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.576770 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.578318 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e5be5a0b5016801958dc73d7c325787c91879521f24c07c2da5031e8344e7342"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.578571 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e5be5a0b5016801958dc73d7c325787c91879521f24c07c2da5031e8344e7342" gracePeriod=30
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.587482 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.610776 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.618324 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.668557 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.694876 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.703357 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.808251 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.865173 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.877813 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.886510 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Nov 28 06:52:38 crc kubenswrapper[4889]: I1128 06:52:38.995613 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.018362 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.139546 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.145223 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.149117 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.290379 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.322648 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.336077 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.355848 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.364320 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.399201 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.417306 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.485408 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.545317 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.553348 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.740764 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.766583 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.806613 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 28 06:52:39 crc kubenswrapper[4889]: I1128 06:52:39.924185 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.013524 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.220160 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.222991 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.405633 4889 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.428041 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.479473 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.486607 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.596907 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.713014 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.752415 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.773095 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.800129 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.811694 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.871137 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.948608 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 28 06:52:40 crc kubenswrapper[4889]: I1128 06:52:40.949520 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.025865 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.066116 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.128215 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.154100 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.221865 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.273206 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.344311 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.345884 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.371288 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.391537 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.474037 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.482995 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.576802 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.679263 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.715456 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.823018 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.861640 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.884550 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.907903 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:52:41 crc kubenswrapper[4889]: I1128 06:52:41.958188 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.011960 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.052893 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.116417 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.174484 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.188581 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.430738 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.442144 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.444634 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.736071 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.767502 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.803936 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.830083 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.878456 4889 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.903000 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 28 06:52:42 crc kubenswrapper[4889]: I1128 06:52:42.959895 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.012661 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.020281 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.022434 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.156505 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.172872 4889 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.260091 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.280751 4889 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.281409 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f" gracePeriod=5
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.321474 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.368471 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.395527 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.408876 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.412413 4889 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.414225 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.545927 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.571583 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.591124 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.600663 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.651612 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.711226 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.715428 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.730699 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.782227 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.788189 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.885524 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.915964 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.918314 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 28 06:52:43 crc kubenswrapper[4889]: I1128 06:52:43.930905 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.250656 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.281182 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.281789 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.586983 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.721079 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.730885 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.759732 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.760494 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.842560 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 28 06:52:44 crc kubenswrapper[4889]: I1128 06:52:44.876881 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 28 06:52:45 crc kubenswrapper[4889]: I1128 06:52:45.061373 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 28 06:52:45 crc kubenswrapper[4889]: I1128 06:52:45.483334 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 28 06:52:45 crc kubenswrapper[4889]: I1128 06:52:45.624553 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 28 06:52:45 crc kubenswrapper[4889]: I1128 06:52:45.706015 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 28 06:52:45 crc kubenswrapper[4889]: I1128 06:52:45.897743 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 28 06:52:46 crc kubenswrapper[4889]: I1128 06:52:46.317455 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 28 06:52:46 crc kubenswrapper[4889]: I1128 06:52:46.494069 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 28 06:52:46 crc kubenswrapper[4889]: I1128 06:52:46.539075 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 28 06:52:46 crc kubenswrapper[4889]: I1128 06:52:46.562162 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 28 06:52:46 crc kubenswrapper[4889]: I1128 06:52:46.877905 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 28 06:52:47 crc kubenswrapper[4889]: I1128 06:52:47.003774 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 28 06:52:47 crc kubenswrapper[4889]: I1128 06:52:47.232016 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Nov 28 06:52:47 crc kubenswrapper[4889]: I1128 06:52:47.533822 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 28 06:52:47 crc kubenswrapper[4889]: I1128 06:52:47.725145 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.861530 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.861638 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.898847 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.898958 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.898985 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.899012 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.899027 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.899036 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.899097 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.899204 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.899261 4889 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.899275 4889 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.899282 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.907675 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.957424 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.957476 4889 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f" exitCode=137
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.957526 4889 scope.go:117] "RemoveContainer" containerID="10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f"
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.957642 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.972682 4889 scope.go:117] "RemoveContainer" containerID="10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f"
Nov 28 06:52:48 crc kubenswrapper[4889]: E1128 06:52:48.973259 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f\": container with ID starting with 10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f not found: ID does not exist" containerID="10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f"
Nov 28 06:52:48 crc kubenswrapper[4889]: I1128 06:52:48.973305 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f"} err="failed to get container status \"10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f\": rpc error: code = NotFound desc = could not find container \"10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f\": container with ID starting with 10f09ce91beeb227a50bd92c79fca028f5421a946d993ca821b637361d936c4f not found: ID does not exist"
Nov 28 06:52:49 crc kubenswrapper[4889]: I1128 06:52:49.000400 4889 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:52:49 crc kubenswrapper[4889]: I1128 06:52:49.000449 4889 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Nov 28 06:52:49 crc kubenswrapper[4889]: I1128 06:52:49.000459 4889 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:52:49 crc kubenswrapper[4889]: I1128 06:52:49.342911 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482287 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6644f974c8-2s28s"]
Nov 28 06:52:51 crc kubenswrapper[4889]: E1128 06:52:51.482524 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerName="registry-server"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482536 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerName="registry-server"
Nov 28 06:52:51 crc kubenswrapper[4889]: E1128 06:52:51.482546 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" containerName="installer"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482552 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" containerName="installer"
Nov 28 06:52:51 crc kubenswrapper[4889]: E1128 06:52:51.482560 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerName="extract-content"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482566 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerName="extract-content"
Nov 28 06:52:51 crc kubenswrapper[4889]: E1128 06:52:51.482581 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f4d399-8f92-4d2f-afa4-8f460aff4348" containerName="oauth-openshift"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482587 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f4d399-8f92-4d2f-afa4-8f460aff4348" containerName="oauth-openshift"
Nov 28 06:52:51 crc kubenswrapper[4889]: E1128 06:52:51.482599 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482604 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 28 06:52:51 crc kubenswrapper[4889]: E1128 06:52:51.482613 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerName="extract-utilities"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482619 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerName="extract-utilities"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482739 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b65dcd-db7e-4ce7-b6b8-8709d23b2f4c" containerName="installer"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482758 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee34603-f895-4e08-88d2-dc04ac976df1" containerName="registry-server"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482765 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f4d399-8f92-4d2f-afa4-8f460aff4348" containerName="oauth-openshift"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.482772 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.483175 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.486414 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.486803 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.487639 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.487956 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.489227 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.489363 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.489746 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.490162 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.490519 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.491723 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.491738 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.493230 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.502915 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.509746 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6644f974c8-2s28s"]
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.510230 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.514757 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532598 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-session\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532658 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-audit-policies\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532694 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532746 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/811ee8f7-5e4d-4441-848a-1f78a2ec3384-audit-dir\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532807 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-login\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532853 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532881 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-router-certs\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532910 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532938 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.532995 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szfn\" (UniqueName: \"kubernetes.io/projected/811ee8f7-5e4d-4441-848a-1f78a2ec3384-kube-api-access-2szfn\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.533031 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.533053 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-service-ca\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.533072 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-error\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.533093 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.634828 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636019 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-service-ca\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s"
Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636082 4889 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-error\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636135 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636196 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-session\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636230 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-audit-policies\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636266 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636330 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/811ee8f7-5e4d-4441-848a-1f78a2ec3384-audit-dir\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636353 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-login\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636405 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636453 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636477 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-router-certs\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636521 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636519 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/811ee8f7-5e4d-4441-848a-1f78a2ec3384-audit-dir\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636629 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szfn\" (UniqueName: \"kubernetes.io/projected/811ee8f7-5e4d-4441-848a-1f78a2ec3384-kube-api-access-2szfn\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.636994 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-service-ca\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.637965 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.638160 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-audit-policies\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.640898 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.642444 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.642441 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.642491 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-login\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.642817 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-template-error\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.643124 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.643256 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.643643 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-router-certs\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.644336 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/811ee8f7-5e4d-4441-848a-1f78a2ec3384-v4-0-config-system-session\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.656138 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2szfn\" (UniqueName: \"kubernetes.io/projected/811ee8f7-5e4d-4441-848a-1f78a2ec3384-kube-api-access-2szfn\") pod \"oauth-openshift-6644f974c8-2s28s\" (UID: \"811ee8f7-5e4d-4441-848a-1f78a2ec3384\") " pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:51 crc kubenswrapper[4889]: I1128 06:52:51.801541 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:52 crc kubenswrapper[4889]: I1128 06:52:52.087145 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6644f974c8-2s28s"] Nov 28 06:52:53 crc kubenswrapper[4889]: I1128 06:52:53.003915 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" event={"ID":"811ee8f7-5e4d-4441-848a-1f78a2ec3384","Type":"ContainerStarted","Data":"30b7cae91b0d980b4149373262de4e4e0d0e37460bcedc55740bd3112d053040"} Nov 28 06:52:53 crc kubenswrapper[4889]: I1128 06:52:53.004405 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" event={"ID":"811ee8f7-5e4d-4441-848a-1f78a2ec3384","Type":"ContainerStarted","Data":"a08019077d9368297d3a17e6af9d20cf9ebf6a83afb5476c92097ffc0aed14d5"} Nov 28 06:52:53 crc kubenswrapper[4889]: I1128 06:52:53.004440 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:53 crc kubenswrapper[4889]: I1128 06:52:53.014315 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" Nov 28 06:52:53 crc kubenswrapper[4889]: I1128 06:52:53.029850 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6644f974c8-2s28s" podStartSLOduration=57.029833424 podStartE2EDuration="57.029833424s" podCreationTimestamp="2025-11-28 06:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:52:53.025858437 +0000 UTC m=+295.996092602" watchObservedRunningTime="2025-11-28 06:52:53.029833424 +0000 UTC m=+296.000067599" Nov 28 06:52:57 crc kubenswrapper[4889]: I1128 06:52:57.187416 4889 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 28 06:53:09 crc kubenswrapper[4889]: I1128 06:53:09.114995 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 28 06:53:09 crc kubenswrapper[4889]: I1128 06:53:09.119407 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 06:53:09 crc kubenswrapper[4889]: I1128 06:53:09.119737 4889 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="e5be5a0b5016801958dc73d7c325787c91879521f24c07c2da5031e8344e7342" exitCode=137 Nov 28 06:53:09 crc kubenswrapper[4889]: I1128 06:53:09.119828 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e5be5a0b5016801958dc73d7c325787c91879521f24c07c2da5031e8344e7342"} Nov 28 06:53:09 crc kubenswrapper[4889]: I1128 06:53:09.120091 4889 scope.go:117] "RemoveContainer" containerID="4b5c7e5efce4f046f0c27499e2b7be111033f64d8c52ec3812af651e405e7ec6" Nov 28 06:53:10 crc kubenswrapper[4889]: I1128 06:53:10.127676 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 28 06:53:10 crc kubenswrapper[4889]: I1128 06:53:10.129338 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bcf0e57ce0f643c3f257b00b91eb0038c90a382d57769a5d83a2d6bd13b0571f"} Nov 28 06:53:14 crc kubenswrapper[4889]: I1128 06:53:14.445341 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:53:18 crc kubenswrapper[4889]: I1128 06:53:18.576327 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:53:18 crc kubenswrapper[4889]: I1128 06:53:18.584379 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:53:19 crc kubenswrapper[4889]: I1128 06:53:19.182500 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.635172 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5td62"] Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.636525 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.658128 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5td62"] Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.691840 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc"] Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.692201 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" podUID="8502f12d-fa3b-441f-b96d-e33d236f8131" containerName="route-controller-manager" containerID="cri-o://3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742" gracePeriod=30 Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.694972 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4kcw5"] Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.695238 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" podUID="32d7045a-59bd-4637-9365-be7ca63fab06" containerName="controller-manager" containerID="cri-o://42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a" gracePeriod=30 Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.719339 4889 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-sl2sc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.719417 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" podUID="8502f12d-fa3b-441f-b96d-e33d236f8131" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.728402 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dba21a8f-9c0b-4618-a852-09895304765b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.728448 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dba21a8f-9c0b-4618-a852-09895304765b-registry-certificates\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.728481 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dba21a8f-9c0b-4618-a852-09895304765b-trusted-ca\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.728508 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-registry-tls\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.728541 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.728733 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dba21a8f-9c0b-4618-a852-09895304765b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.728784 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-bound-sa-token\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.728909 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dxl\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-kube-api-access-m2dxl\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.765276 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.830532 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dba21a8f-9c0b-4618-a852-09895304765b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.830584 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-bound-sa-token\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 
06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.830624 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2dxl\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-kube-api-access-m2dxl\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.830643 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dba21a8f-9c0b-4618-a852-09895304765b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.830661 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dba21a8f-9c0b-4618-a852-09895304765b-registry-certificates\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.831213 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dba21a8f-9c0b-4618-a852-09895304765b-trusted-ca\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.831241 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-registry-tls\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.831166 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dba21a8f-9c0b-4618-a852-09895304765b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.832282 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dba21a8f-9c0b-4618-a852-09895304765b-registry-certificates\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.833237 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dba21a8f-9c0b-4618-a852-09895304765b-trusted-ca\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.838575 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dba21a8f-9c0b-4618-a852-09895304765b-installation-pull-secrets\") 
pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.838614 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-registry-tls\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.854589 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2dxl\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-kube-api-access-m2dxl\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.859014 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba21a8f-9c0b-4618-a852-09895304765b-bound-sa-token\") pod \"image-registry-66df7c8f76-5td62\" (UID: \"dba21a8f-9c0b-4618-a852-09895304765b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:25 crc kubenswrapper[4889]: I1128 06:53:25.953255 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.112459 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.166904 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.175755 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76d8974cd6-2qlmh"] Nov 28 06:53:26 crc kubenswrapper[4889]: E1128 06:53:26.176026 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d7045a-59bd-4637-9365-be7ca63fab06" containerName="controller-manager" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.176038 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d7045a-59bd-4637-9365-be7ca63fab06" containerName="controller-manager" Nov 28 06:53:26 crc kubenswrapper[4889]: E1128 06:53:26.176049 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8502f12d-fa3b-441f-b96d-e33d236f8131" containerName="route-controller-manager" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.176058 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8502f12d-fa3b-441f-b96d-e33d236f8131" containerName="route-controller-manager" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.176144 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="8502f12d-fa3b-441f-b96d-e33d236f8131" containerName="route-controller-manager" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.176160 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d7045a-59bd-4637-9365-be7ca63fab06" containerName="controller-manager" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.176551 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.189913 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76d8974cd6-2qlmh"] Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.224669 4889 generic.go:334] "Generic (PLEG): container finished" podID="32d7045a-59bd-4637-9365-be7ca63fab06" containerID="42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a" exitCode=0 Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.224752 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" event={"ID":"32d7045a-59bd-4637-9365-be7ca63fab06","Type":"ContainerDied","Data":"42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a"} Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.224783 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" event={"ID":"32d7045a-59bd-4637-9365-be7ca63fab06","Type":"ContainerDied","Data":"aba17c347eb645aadbc6b35b47af72362e2e257f680bac1ba49e9acf85af23c0"} Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.224801 4889 scope.go:117] "RemoveContainer" containerID="42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.224914 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4kcw5" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.233447 4889 generic.go:334] "Generic (PLEG): container finished" podID="8502f12d-fa3b-441f-b96d-e33d236f8131" containerID="3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742" exitCode=0 Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.233526 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" event={"ID":"8502f12d-fa3b-441f-b96d-e33d236f8131","Type":"ContainerDied","Data":"3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742"} Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.233611 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" event={"ID":"8502f12d-fa3b-441f-b96d-e33d236f8131","Type":"ContainerDied","Data":"03488da303b76d2e1b5e980a6675596b1a10d0b2d4e61d2cafba412894d65809"} Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.233755 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.248346 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-client-ca\") pod \"32d7045a-59bd-4637-9365-be7ca63fab06\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.248425 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qvll\" (UniqueName: \"kubernetes.io/projected/32d7045a-59bd-4637-9365-be7ca63fab06-kube-api-access-8qvll\") pod \"32d7045a-59bd-4637-9365-be7ca63fab06\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.248458 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32d7045a-59bd-4637-9365-be7ca63fab06-serving-cert\") pod \"32d7045a-59bd-4637-9365-be7ca63fab06\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.248537 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-proxy-ca-bundles\") pod \"32d7045a-59bd-4637-9365-be7ca63fab06\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.248589 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-config\") pod \"32d7045a-59bd-4637-9365-be7ca63fab06\" (UID: \"32d7045a-59bd-4637-9365-be7ca63fab06\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.249382 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-client-ca" (OuterVolumeSpecName: "client-ca") pod "32d7045a-59bd-4637-9365-be7ca63fab06" (UID: "32d7045a-59bd-4637-9365-be7ca63fab06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.249804 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "32d7045a-59bd-4637-9365-be7ca63fab06" (UID: "32d7045a-59bd-4637-9365-be7ca63fab06"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.250361 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-config" (OuterVolumeSpecName: "config") pod "32d7045a-59bd-4637-9365-be7ca63fab06" (UID: "32d7045a-59bd-4637-9365-be7ca63fab06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.251409 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5td62"] Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.258538 4889 scope.go:117] "RemoveContainer" containerID="42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.258837 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d7045a-59bd-4637-9365-be7ca63fab06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32d7045a-59bd-4637-9365-be7ca63fab06" (UID: "32d7045a-59bd-4637-9365-be7ca63fab06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: E1128 06:53:26.259114 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a\": container with ID starting with 42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a not found: ID does not exist" containerID="42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.259225 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a"} err="failed to get container status \"42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a\": rpc error: code = NotFound desc = could not find container \"42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a\": container with ID starting with 42ab8f48b86eb86a8f954d4932e564e4927795a105092420732b6583ea1b088a not found: ID does not exist" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.259345 4889 scope.go:117] "RemoveContainer" containerID="3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.259241 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d7045a-59bd-4637-9365-be7ca63fab06-kube-api-access-8qvll" (OuterVolumeSpecName: "kube-api-access-8qvll") pod "32d7045a-59bd-4637-9365-be7ca63fab06" (UID: "32d7045a-59bd-4637-9365-be7ca63fab06"). InnerVolumeSpecName "kube-api-access-8qvll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.280232 4889 scope.go:117] "RemoveContainer" containerID="3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742" Nov 28 06:53:26 crc kubenswrapper[4889]: E1128 06:53:26.280546 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742\": container with ID starting with 3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742 not found: ID does not exist" containerID="3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.280602 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742"} err="failed to get container status \"3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742\": rpc error: code = NotFound desc = could not find container \"3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742\": container with ID starting with 3ce66b75f3ceef9827b3fed5f0bdab505f27014a0f7e86775f40f102cacdc742 not found: ID does not exist" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.349699 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-client-ca\") pod \"8502f12d-fa3b-441f-b96d-e33d236f8131\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.349804 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8502f12d-fa3b-441f-b96d-e33d236f8131-serving-cert\") pod \"8502f12d-fa3b-441f-b96d-e33d236f8131\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.349861 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-config\") pod \"8502f12d-fa3b-441f-b96d-e33d236f8131\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.349888 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg2f6\" (UniqueName: \"kubernetes.io/projected/8502f12d-fa3b-441f-b96d-e33d236f8131-kube-api-access-vg2f6\") pod \"8502f12d-fa3b-441f-b96d-e33d236f8131\" (UID: \"8502f12d-fa3b-441f-b96d-e33d236f8131\") " Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350091 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-serving-cert\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350132 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-proxy-ca-bundles\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " 
pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350177 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-config\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350238 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-client-ca\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350269 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkd8z\" (UniqueName: \"kubernetes.io/projected/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-kube-api-access-tkd8z\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350336 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350352 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qvll\" (UniqueName: \"kubernetes.io/projected/32d7045a-59bd-4637-9365-be7ca63fab06-kube-api-access-8qvll\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350368 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32d7045a-59bd-4637-9365-be7ca63fab06-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350380 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350394 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d7045a-59bd-4637-9365-be7ca63fab06-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350649 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-client-ca" (OuterVolumeSpecName: "client-ca") pod "8502f12d-fa3b-441f-b96d-e33d236f8131" (UID: "8502f12d-fa3b-441f-b96d-e33d236f8131"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.350752 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-config" (OuterVolumeSpecName: "config") pod "8502f12d-fa3b-441f-b96d-e33d236f8131" (UID: "8502f12d-fa3b-441f-b96d-e33d236f8131"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.352991 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8502f12d-fa3b-441f-b96d-e33d236f8131-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8502f12d-fa3b-441f-b96d-e33d236f8131" (UID: "8502f12d-fa3b-441f-b96d-e33d236f8131"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.353254 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8502f12d-fa3b-441f-b96d-e33d236f8131-kube-api-access-vg2f6" (OuterVolumeSpecName: "kube-api-access-vg2f6") pod "8502f12d-fa3b-441f-b96d-e33d236f8131" (UID: "8502f12d-fa3b-441f-b96d-e33d236f8131"). InnerVolumeSpecName "kube-api-access-vg2f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451529 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-client-ca\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451591 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkd8z\" (UniqueName: \"kubernetes.io/projected/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-kube-api-access-tkd8z\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451644 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-serving-cert\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451677 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-proxy-ca-bundles\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451741 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-config\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451819 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451831 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8502f12d-fa3b-441f-b96d-e33d236f8131-serving-cert\") on node \"crc\" 
DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451844 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8502f12d-fa3b-441f-b96d-e33d236f8131-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.451852 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg2f6\" (UniqueName: \"kubernetes.io/projected/8502f12d-fa3b-441f-b96d-e33d236f8131-kube-api-access-vg2f6\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.453029 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-config\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.453553 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-client-ca\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.455397 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-proxy-ca-bundles\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.457493 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-serving-cert\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.505276 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkd8z\" (UniqueName: \"kubernetes.io/projected/ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc-kube-api-access-tkd8z\") pod \"controller-manager-76d8974cd6-2qlmh\" (UID: \"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc\") " pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.510299 4889 util.go:30] "No sandbox for pod can be found. 
Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.554460 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4kcw5"]
Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.567581 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4kcw5"]
Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.584421 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc"]
Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.592688 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sl2sc"]
Nov 28 06:53:26 crc kubenswrapper[4889]: I1128 06:53:26.715293 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76d8974cd6-2qlmh"]
Nov 28 06:53:26 crc kubenswrapper[4889]: W1128 06:53:26.721447 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff420073_dc5e_4d9f_bb26_7f3dbd8bbabc.slice/crio-2edaf3fb1abebf18243293bd68c853f67700b38d3d2d3a76d65fb7dfe94dc5ed WatchSource:0}: Error finding container 2edaf3fb1abebf18243293bd68c853f67700b38d3d2d3a76d65fb7dfe94dc5ed: Status 404 returned error can't find the container with id 2edaf3fb1abebf18243293bd68c853f67700b38d3d2d3a76d65fb7dfe94dc5ed
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.242601 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" event={"ID":"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc","Type":"ContainerStarted","Data":"f8a81bbcfa05471893b956275415824cd9d3ba91097e4399ebba0e3f0f695c49"}
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.243012 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.243025 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" event={"ID":"ff420073-dc5e-4d9f-bb26-7f3dbd8bbabc","Type":"ContainerStarted","Data":"2edaf3fb1abebf18243293bd68c853f67700b38d3d2d3a76d65fb7dfe94dc5ed"}
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.247238 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5td62" event={"ID":"dba21a8f-9c0b-4618-a852-09895304765b","Type":"ContainerStarted","Data":"7f1d375eb4893aa1df5be97a444b3718d070af86b85d246e7aa872f1469ab4ac"}
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.247297 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5td62" event={"ID":"dba21a8f-9c0b-4618-a852-09895304765b","Type":"ContainerStarted","Data":"50ee5e965bd072c987a9a824916ae994c70db3b345ab083cdddf7a17b51c4311"}
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.247400 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5td62"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.249322 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.263870 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76d8974cd6-2qlmh" podStartSLOduration=1.263844417 podStartE2EDuration="1.263844417s" podCreationTimestamp="2025-11-28 06:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:53:27.260724221 +0000 UTC m=+330.230958386" watchObservedRunningTime="2025-11-28 06:53:27.263844417 +0000 UTC m=+330.234078572"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.338730 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5td62" podStartSLOduration=2.338697062 podStartE2EDuration="2.338697062s" podCreationTimestamp="2025-11-28 06:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:53:27.335026893 +0000 UTC m=+330.305261048" watchObservedRunningTime="2025-11-28 06:53:27.338697062 +0000 UTC m=+330.308931217"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.342133 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d7045a-59bd-4637-9365-be7ca63fab06" path="/var/lib/kubelet/pods/32d7045a-59bd-4637-9365-be7ca63fab06/volumes"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.342823 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8502f12d-fa3b-441f-b96d-e33d236f8131" path="/var/lib/kubelet/pods/8502f12d-fa3b-441f-b96d-e33d236f8131/volumes"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.506123 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"]
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.506801 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.509273 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.509544 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.510289 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.510646 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.511030 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.511416 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.517370 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"]
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.570551 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5afe5ce-1002-414f-a5bc-69a03598189f-serving-cert\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.570653 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5afe5ce-1002-414f-a5bc-69a03598189f-config\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.570731 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5afe5ce-1002-414f-a5bc-69a03598189f-client-ca\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.570771 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528xb\" (UniqueName: \"kubernetes.io/projected/a5afe5ce-1002-414f-a5bc-69a03598189f-kube-api-access-528xb\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.672625 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5afe5ce-1002-414f-a5bc-69a03598189f-client-ca\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.672692 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-528xb\" (UniqueName: \"kubernetes.io/projected/a5afe5ce-1002-414f-a5bc-69a03598189f-kube-api-access-528xb\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.672796 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5afe5ce-1002-414f-a5bc-69a03598189f-serving-cert\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.672833 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5afe5ce-1002-414f-a5bc-69a03598189f-config\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.674268 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5afe5ce-1002-414f-a5bc-69a03598189f-client-ca\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.674463 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5afe5ce-1002-414f-a5bc-69a03598189f-config\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.682581 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5afe5ce-1002-414f-a5bc-69a03598189f-serving-cert\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.690552 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-528xb\" (UniqueName: \"kubernetes.io/projected/a5afe5ce-1002-414f-a5bc-69a03598189f-kube-api-access-528xb\") pod \"route-controller-manager-5485c7fbd4-52vqh\" (UID: \"a5afe5ce-1002-414f-a5bc-69a03598189f\") " pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:27 crc kubenswrapper[4889]: I1128 06:53:27.823801 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:28 crc kubenswrapper[4889]: I1128 06:53:28.065834 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"]
Nov 28 06:53:28 crc kubenswrapper[4889]: W1128 06:53:28.069000 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5afe5ce_1002_414f_a5bc_69a03598189f.slice/crio-a45543f7ef4d57a75aeb71a2b02db185b72e9de5ed19c74af8651e0a45979fb9 WatchSource:0}: Error finding container a45543f7ef4d57a75aeb71a2b02db185b72e9de5ed19c74af8651e0a45979fb9: Status 404 returned error can't find the container with id a45543f7ef4d57a75aeb71a2b02db185b72e9de5ed19c74af8651e0a45979fb9
Nov 28 06:53:28 crc kubenswrapper[4889]: I1128 06:53:28.256331 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh" event={"ID":"a5afe5ce-1002-414f-a5bc-69a03598189f","Type":"ContainerStarted","Data":"a8007e8d95a70abc0a2863f38cb88eeb981c508495de2ccaf619d5a5439d14cd"}
Nov 28 06:53:28 crc kubenswrapper[4889]: I1128 06:53:28.256925 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh" event={"ID":"a5afe5ce-1002-414f-a5bc-69a03598189f","Type":"ContainerStarted","Data":"a45543f7ef4d57a75aeb71a2b02db185b72e9de5ed19c74af8651e0a45979fb9"}
Nov 28 06:53:28 crc kubenswrapper[4889]: I1128 06:53:28.281151 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh" podStartSLOduration=2.281129128 podStartE2EDuration="2.281129128s" podCreationTimestamp="2025-11-28 06:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:53:28.276931296 +0000 UTC m=+331.247165451" watchObservedRunningTime="2025-11-28 06:53:28.281129128 +0000 UTC m=+331.251363283"
Nov 28 06:53:29 crc kubenswrapper[4889]: I1128 06:53:29.262213 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:29 crc kubenswrapper[4889]: I1128 06:53:29.269594 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5485c7fbd4-52vqh"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.538990 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhttl"]
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.539861 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vhttl" podUID="e1c17912-a129-45b4-b833-04493886c507" containerName="registry-server" containerID="cri-o://999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e" gracePeriod=30
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.548347 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxh5k"]
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.549085 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxh5k" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerName="registry-server" containerID="cri-o://64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90" gracePeriod=30
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.560901 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2tcxh"]
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.561237 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" podUID="56d26fb0-3c51-4131-ab05-3e0e407bd9dd" containerName="marketplace-operator" containerID="cri-o://1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad" gracePeriod=30
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.581167 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mw2sc"]
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.581577 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mw2sc" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerName="registry-server" containerID="cri-o://f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299" gracePeriod=30
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.587135 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8f887"]
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.587515 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8f887" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="registry-server" containerID="cri-o://3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783" gracePeriod=30
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.595291 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ktxlv"]
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.596083 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.599147 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ktxlv"]
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.710416 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93f9f385-e809-4bca-b770-f6967eaa5578-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.710671 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93f9f385-e809-4bca-b770-f6967eaa5578-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.710778 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksntv\" (UniqueName: \"kubernetes.io/projected/93f9f385-e809-4bca-b770-f6967eaa5578-kube-api-access-ksntv\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.812091 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksntv\" (UniqueName: \"kubernetes.io/projected/93f9f385-e809-4bca-b770-f6967eaa5578-kube-api-access-ksntv\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.812171 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93f9f385-e809-4bca-b770-f6967eaa5578-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.812197 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93f9f385-e809-4bca-b770-f6967eaa5578-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.813766 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93f9f385-e809-4bca-b770-f6967eaa5578-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.818948 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93f9f385-e809-4bca-b770-f6967eaa5578-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: I1128 06:53:35.829578 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksntv\" (UniqueName: \"kubernetes.io/projected/93f9f385-e809-4bca-b770-f6967eaa5578-kube-api-access-ksntv\") pod \"marketplace-operator-79b997595-ktxlv\" (UID: \"93f9f385-e809-4bca-b770-f6967eaa5578\") " pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:35 crc kubenswrapper[4889]: E1128 06:53:35.890085 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783 is running failed: container process not found" containerID="3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 06:53:35 crc kubenswrapper[4889]: E1128 06:53:35.890681 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783 is running failed: container process not found" containerID="3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 06:53:35 crc kubenswrapper[4889]: E1128 06:53:35.891242 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783 is running failed: container process not found" containerID="3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 06:53:35 crc kubenswrapper[4889]: E1128 06:53:35.891277 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8f887" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="registry-server"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.020248 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.028172 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhttl"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.115995 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4kss\" (UniqueName: \"kubernetes.io/projected/e1c17912-a129-45b4-b833-04493886c507-kube-api-access-l4kss\") pod \"e1c17912-a129-45b4-b833-04493886c507\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") "
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.116073 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-catalog-content\") pod \"e1c17912-a129-45b4-b833-04493886c507\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") "
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.116111 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-utilities\") pod \"e1c17912-a129-45b4-b833-04493886c507\" (UID: \"e1c17912-a129-45b4-b833-04493886c507\") "
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.117798 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-utilities" (OuterVolumeSpecName: "utilities") pod "e1c17912-a129-45b4-b833-04493886c507" (UID: "e1c17912-a129-45b4-b833-04493886c507"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.128017 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c17912-a129-45b4-b833-04493886c507-kube-api-access-l4kss" (OuterVolumeSpecName: "kube-api-access-l4kss") pod "e1c17912-a129-45b4-b833-04493886c507" (UID: "e1c17912-a129-45b4-b833-04493886c507"). InnerVolumeSpecName "kube-api-access-l4kss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.182199 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1c17912-a129-45b4-b833-04493886c507" (UID: "e1c17912-a129-45b4-b833-04493886c507"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.198315 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxh5k"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.220456 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4kss\" (UniqueName: \"kubernetes.io/projected/e1c17912-a129-45b4-b833-04493886c507-kube-api-access-l4kss\") on node \"crc\" DevicePath \"\""
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.220481 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.220491 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c17912-a129-45b4-b833-04493886c507-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.237824 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.246363 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.251893 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mw2sc"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.310912 4889 generic.go:334] "Generic (PLEG): container finished" podID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerID="3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783" exitCode=0
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.310974 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f887" event={"ID":"33bff935-df7b-4a61-8cab-84f408c1c9de","Type":"ContainerDied","Data":"3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783"}
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.311005 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f887" event={"ID":"33bff935-df7b-4a61-8cab-84f408c1c9de","Type":"ContainerDied","Data":"5020b5ced62aa41ef56bfb0ed71a12ee9cb6b73db2b3ffe1e9f79aa599770f26"}
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.311023 4889 scope.go:117] "RemoveContainer" containerID="3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.311149 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8f887"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.314325 4889 generic.go:334] "Generic (PLEG): container finished" podID="e1c17912-a129-45b4-b833-04493886c507" containerID="999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e" exitCode=0
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.314381 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhttl" event={"ID":"e1c17912-a129-45b4-b833-04493886c507","Type":"ContainerDied","Data":"999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e"}
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.314411 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhttl" event={"ID":"e1c17912-a129-45b4-b833-04493886c507","Type":"ContainerDied","Data":"5ca4bc7f70088fda546554ea2d01568caf73b05893484d0895005fb88c876486"}
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.314473 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhttl"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.318639 4889 generic.go:334] "Generic (PLEG): container finished" podID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerID="f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299" exitCode=0
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.318776 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mw2sc"
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.318808 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mw2sc" event={"ID":"214d7b41-e8c9-4e25-bf80-48ff31b4a29b","Type":"ContainerDied","Data":"f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299"}
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.318858 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mw2sc" event={"ID":"214d7b41-e8c9-4e25-bf80-48ff31b4a29b","Type":"ContainerDied","Data":"f015e658852a75869cfa267662665092d411673ad1fb9d8226ee91c031fce0fe"}
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321262 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-kube-api-access-cdhmx\") pod \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") "
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321365 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-operator-metrics\") pod \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") "
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321405 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-utilities\") pod \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") "
Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321430 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-trusted-ca\") pod \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") "
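
[editor's note: the generic.go:334 "container finished" lines and the paired ContainerDied events above come from PLEG (the pod lifecycle event generator): it periodically relists container states from the runtime and turns state transitions into events that wake the sync loop. A toy diff of two relist snapshots; the types are hypothetical illustrations, not the kubelet's:]

    package main

    import "fmt"

    type event struct{ containerID, kind string }

    // relist compares the previous and current "is running" snapshots and
    // emits ContainerStarted / ContainerDied for each observed transition.
    func relist(old, cur map[string]bool) []event {
    	var evs []event
    	for id, running := range cur {
    		if running && !old[id] {
    			evs = append(evs, event{id, "ContainerStarted"})
    		}
    	}
    	for id, running := range old {
    		if running && !cur[id] {
    			evs = append(evs, event{id, "ContainerDied"})
    		}
    	}
    	return evs
    }

    func main() {
    	old := map[string]bool{"3ceff9de": true}
    	cur := map[string]bool{"3ceff9de": false} // registry-server exited with code 0
    	fmt.Println(relist(old, cur))             // [{3ceff9de ContainerDied}]
    }
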
\"kubernetes.io/configmap/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-trusted-ca\") pod \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\" (UID: \"56d26fb0-3c51-4131-ab05-3e0e407bd9dd\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321465 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgzbx\" (UniqueName: \"kubernetes.io/projected/33bff935-df7b-4a61-8cab-84f408c1c9de-kube-api-access-jgzbx\") pod \"33bff935-df7b-4a61-8cab-84f408c1c9de\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321494 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-catalog-content\") pod \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321530 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-catalog-content\") pod \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321554 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-utilities\") pod \"33bff935-df7b-4a61-8cab-84f408c1c9de\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321612 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhw2l\" (UniqueName: \"kubernetes.io/projected/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-kube-api-access-fhw2l\") pod \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321679 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-catalog-content\") pod \"33bff935-df7b-4a61-8cab-84f408c1c9de\" (UID: \"33bff935-df7b-4a61-8cab-84f408c1c9de\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321720 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh8vg\" (UniqueName: \"kubernetes.io/projected/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-kube-api-access-bh8vg\") pod \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\" (UID: \"214d7b41-e8c9-4e25-bf80-48ff31b4a29b\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.321740 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-utilities\") pod \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\" (UID: \"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b\") " Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.322757 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-utilities" (OuterVolumeSpecName: "utilities") pod "214d7b41-e8c9-4e25-bf80-48ff31b4a29b" (UID: "214d7b41-e8c9-4e25-bf80-48ff31b4a29b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.323011 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-utilities" (OuterVolumeSpecName: "utilities") pod "95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" (UID: "95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.324211 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "56d26fb0-3c51-4131-ab05-3e0e407bd9dd" (UID: "56d26fb0-3c51-4131-ab05-3e0e407bd9dd"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.324487 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-utilities" (OuterVolumeSpecName: "utilities") pod "33bff935-df7b-4a61-8cab-84f408c1c9de" (UID: "33bff935-df7b-4a61-8cab-84f408c1c9de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.325316 4889 generic.go:334] "Generic (PLEG): container finished" podID="56d26fb0-3c51-4131-ab05-3e0e407bd9dd" containerID="1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad" exitCode=0 Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.325403 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" event={"ID":"56d26fb0-3c51-4131-ab05-3e0e407bd9dd","Type":"ContainerDied","Data":"1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad"} Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.325435 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" event={"ID":"56d26fb0-3c51-4131-ab05-3e0e407bd9dd","Type":"ContainerDied","Data":"4acecc7884ca452a7b93e8d32f72bb01ab7e13c43b74c048ce5bf3cc94f98d20"} Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.325526 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2tcxh" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.326518 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-kube-api-access-bh8vg" (OuterVolumeSpecName: "kube-api-access-bh8vg") pod "214d7b41-e8c9-4e25-bf80-48ff31b4a29b" (UID: "214d7b41-e8c9-4e25-bf80-48ff31b4a29b"). InnerVolumeSpecName "kube-api-access-bh8vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.331032 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-kube-api-access-cdhmx" (OuterVolumeSpecName: "kube-api-access-cdhmx") pod "56d26fb0-3c51-4131-ab05-3e0e407bd9dd" (UID: "56d26fb0-3c51-4131-ab05-3e0e407bd9dd"). InnerVolumeSpecName "kube-api-access-cdhmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.332317 4889 generic.go:334] "Generic (PLEG): container finished" podID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerID="64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90" exitCode=0 Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.332351 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxh5k" event={"ID":"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b","Type":"ContainerDied","Data":"64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90"} Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.332376 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxh5k" event={"ID":"95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b","Type":"ContainerDied","Data":"b1634f6a4135dafb6282d7d187ced351e892910752b8051ca6c7afd2e88c704e"} Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.332431 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxh5k" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.350728 4889 scope.go:117] "RemoveContainer" containerID="813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.351411 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-kube-api-access-fhw2l" (OuterVolumeSpecName: "kube-api-access-fhw2l") pod "95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" (UID: "95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b"). InnerVolumeSpecName "kube-api-access-fhw2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.352617 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "56d26fb0-3c51-4131-ab05-3e0e407bd9dd" (UID: "56d26fb0-3c51-4131-ab05-3e0e407bd9dd"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.353257 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bff935-df7b-4a61-8cab-84f408c1c9de-kube-api-access-jgzbx" (OuterVolumeSpecName: "kube-api-access-jgzbx") pod "33bff935-df7b-4a61-8cab-84f408c1c9de" (UID: "33bff935-df7b-4a61-8cab-84f408c1c9de"). InnerVolumeSpecName "kube-api-access-jgzbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.361069 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "214d7b41-e8c9-4e25-bf80-48ff31b4a29b" (UID: "214d7b41-e8c9-4e25-bf80-48ff31b4a29b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.372278 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhttl"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.374930 4889 scope.go:117] "RemoveContainer" containerID="3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.376204 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vhttl"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.389178 4889 scope.go:117] "RemoveContainer" containerID="3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.389571 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783\": container with ID starting with 3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783 not found: ID does not exist" containerID="3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.389601 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783"} err="failed to get container status \"3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783\": rpc error: code = NotFound desc = could not find container \"3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783\": container with ID starting with 3ceff9ded720267b7e60e91618df2b2af97b74053530b0fa927cc576989e5783 not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.389624 4889 scope.go:117] "RemoveContainer" containerID="813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.389842 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911\": container with ID starting with 813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911 not found: ID does not exist" containerID="813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.389862 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911"} err="failed to get container status \"813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911\": rpc error: code = NotFound desc = could not find container \"813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911\": container with ID starting with 813052e64136ffceddd0157a8190a4ad59cb2bf7d1147a6c40df5fcae93ed911 not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.389873 4889 scope.go:117] "RemoveContainer" containerID="3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.390055 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea\": container with ID starting with 
3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea not found: ID does not exist" containerID="3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.390073 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea"} err="failed to get container status \"3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea\": rpc error: code = NotFound desc = could not find container \"3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea\": container with ID starting with 3b1558b7e5008bd1a826a22d7c912f591c717418981c7059c53df53f481dcfea not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.390086 4889 scope.go:117] "RemoveContainer" containerID="999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.400819 4889 scope.go:117] "RemoveContainer" containerID="050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.413257 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" (UID: "95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.415569 4889 scope.go:117] "RemoveContainer" containerID="fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422783 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422806 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422817 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhw2l\" (UniqueName: \"kubernetes.io/projected/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-kube-api-access-fhw2l\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422826 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh8vg\" (UniqueName: \"kubernetes.io/projected/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-kube-api-access-bh8vg\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422834 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422843 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-kube-api-access-cdhmx\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422852 4889 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
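
[editor's note: the RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above and below are benign: the kubelet's container garbage collector re-deletes containers it still has records for, and CRI-O answers NotFound because they are already gone. The idiomatic handling of such races is to treat NotFound as success. A sketch using real gRPC status codes and a hypothetical remove callback standing in for the runtime client:]

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer deletes via the runtime and swallows NotFound, making
    // the operation idempotent across repeated garbage-collection passes.
    func removeContainer(remove func(id string) error, id string) error {
    	if err := remove(id); err != nil {
    		if status.Code(err) == codes.NotFound {
    			return nil // already gone: the desired state is reached
    		}
    		return err
    	}
    	return nil
    }

    func main() {
    	gone := func(id string) error {
    		return status.Errorf(codes.NotFound, "could not find container %q", id)
    	}
    	fmt.Println(removeContainer(gone, "3ceff9ded720")) // <nil>
    }
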
\"kubernetes.io/secret/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422861 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d7b41-e8c9-4e25-bf80-48ff31b4a29b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422871 4889 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d26fb0-3c51-4131-ab05-3e0e407bd9dd-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422880 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgzbx\" (UniqueName: \"kubernetes.io/projected/33bff935-df7b-4a61-8cab-84f408c1c9de-kube-api-access-jgzbx\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.422888 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.426373 4889 scope.go:117] "RemoveContainer" containerID="999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.426763 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e\": container with ID starting with 999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e not found: ID does not exist" containerID="999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.426794 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e"} err="failed to get container status \"999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e\": rpc error: code = NotFound desc = could not find container \"999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e\": container with ID starting with 999315f44957aee3f625e57b5b3bc8b43bb7f5a2b446ea28368c51f35b3aa23e not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.426818 4889 scope.go:117] "RemoveContainer" containerID="050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.427120 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d\": container with ID starting with 050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d not found: ID does not exist" containerID="050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.427138 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d"} err="failed to get container status \"050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d\": rpc error: code = NotFound desc = could not find container 
\"050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d\": container with ID starting with 050e4ba30baae12c37b32a36b1a65af27e91714e5b646f00d788c88c842f441d not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.427149 4889 scope.go:117] "RemoveContainer" containerID="fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.427363 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993\": container with ID starting with fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993 not found: ID does not exist" containerID="fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.427387 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993"} err="failed to get container status \"fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993\": rpc error: code = NotFound desc = could not find container \"fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993\": container with ID starting with fe4689aaea87783484775e0c0ce85b4468cc7342aaf6932777c29637c2902993 not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.427402 4889 scope.go:117] "RemoveContainer" containerID="f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.438272 4889 scope.go:117] "RemoveContainer" containerID="64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.449421 4889 scope.go:117] "RemoveContainer" containerID="2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.462100 4889 scope.go:117] "RemoveContainer" containerID="f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.462537 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299\": container with ID starting with f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299 not found: ID does not exist" containerID="f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.462574 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299"} err="failed to get container status \"f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299\": rpc error: code = NotFound desc = could not find container \"f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299\": container with ID starting with f39c4a520f893344d985c867d4a321e38ea986401b7134c92566b82c238c1299 not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.462601 4889 scope.go:117] "RemoveContainer" containerID="64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.463207 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25\": container with ID starting with 64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25 not found: ID does not exist" containerID="64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.463246 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25"} err="failed to get container status \"64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25\": rpc error: code = NotFound desc = could not find container \"64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25\": container with ID starting with 64eba1524f5f48ce9d07852d87a4bd3abe189c43765889c2e1bfa85501acde25 not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.463261 4889 scope.go:117] "RemoveContainer" containerID="2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.463807 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc\": container with ID starting with 2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc not found: ID does not exist" containerID="2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.463853 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc"} err="failed to get container status \"2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc\": rpc error: code = NotFound desc = could not find container \"2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc\": container with ID starting with 2d7a1532d9cbc5b1f6cbcead7b5df02905b037604ede6824dad81ab4dd0e06dc not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.463885 4889 scope.go:117] "RemoveContainer" containerID="1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.466265 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33bff935-df7b-4a61-8cab-84f408c1c9de" (UID: "33bff935-df7b-4a61-8cab-84f408c1c9de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.476522 4889 scope.go:117] "RemoveContainer" containerID="1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.476940 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad\": container with ID starting with 1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad not found: ID does not exist" containerID="1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.476973 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad"} err="failed to get container status \"1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad\": rpc error: code = NotFound desc = could not find container \"1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad\": container with ID starting with 1880af853d5b5d4bcc638bda61335b5115dcc523233a2a4999f58af6b80b7dad not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.476998 4889 scope.go:117] "RemoveContainer" containerID="64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.487555 4889 scope.go:117] "RemoveContainer" containerID="cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.502346 4889 scope.go:117] "RemoveContainer" containerID="6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.513716 4889 scope.go:117] "RemoveContainer" containerID="64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.514189 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90\": container with ID starting with 64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90 not found: ID does not exist" containerID="64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.514232 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90"} err="failed to get container status \"64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90\": rpc error: code = NotFound desc = could not find container \"64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90\": container with ID starting with 64e7b3007306fa507f67cfa0065bd064c445e6cb13505a9f96b2d2029c5c9f90 not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.514262 4889 scope.go:117] "RemoveContainer" containerID="cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.514698 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc\": container with ID starting with 
cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc not found: ID does not exist" containerID="cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.514755 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc"} err="failed to get container status \"cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc\": rpc error: code = NotFound desc = could not find container \"cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc\": container with ID starting with cc9bffd0b5258e8511e9860678fc233aa1b2c8565a8a2376dfc1a2dd318e92fc not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.514783 4889 scope.go:117] "RemoveContainer" containerID="6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d" Nov 28 06:53:36 crc kubenswrapper[4889]: E1128 06:53:36.515138 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d\": container with ID starting with 6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d not found: ID does not exist" containerID="6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.515168 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d"} err="failed to get container status \"6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d\": rpc error: code = NotFound desc = could not find container \"6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d\": container with ID starting with 6159461cda32795bbc0e71a5326cedbadf09b772e35b1d346f42ce2cd4bdc42d not found: ID does not exist" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.524292 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33bff935-df7b-4a61-8cab-84f408c1c9de-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.592276 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ktxlv"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.643440 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8f887"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.648200 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8f887"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.667854 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mw2sc"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.671116 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mw2sc"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.679145 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2tcxh"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.684009 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2tcxh"] Nov 28 06:53:36 crc kubenswrapper[4889]: 
I1128 06:53:36.693134 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxh5k"] Nov 28 06:53:36 crc kubenswrapper[4889]: I1128 06:53:36.698256 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxh5k"] Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.341020 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" path="/var/lib/kubelet/pods/214d7b41-e8c9-4e25-bf80-48ff31b4a29b/volumes" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.341760 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" path="/var/lib/kubelet/pods/33bff935-df7b-4a61-8cab-84f408c1c9de/volumes" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.342455 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d26fb0-3c51-4131-ab05-3e0e407bd9dd" path="/var/lib/kubelet/pods/56d26fb0-3c51-4131-ab05-3e0e407bd9dd/volumes" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.343317 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" path="/var/lib/kubelet/pods/95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b/volumes" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.343901 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c17912-a129-45b4-b833-04493886c507" path="/var/lib/kubelet/pods/e1c17912-a129-45b4-b833-04493886c507/volumes" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.349058 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv" event={"ID":"93f9f385-e809-4bca-b770-f6967eaa5578","Type":"ContainerStarted","Data":"373032779e77b027904146cc0139d680f82568ef44ed1f1ec5cec2941cce4114"} Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.349106 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv" event={"ID":"93f9f385-e809-4bca-b770-f6967eaa5578","Type":"ContainerStarted","Data":"0abb3cf30347d0c4e1b4a76ae74536acc7025c8d7e4a3e40d48bb1533e093c65"} Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.349887 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.352891 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.394109 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ktxlv" podStartSLOduration=2.394083057 podStartE2EDuration="2.394083057s" podCreationTimestamp="2025-11-28 06:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:53:37.369860626 +0000 UTC m=+340.340094801" watchObservedRunningTime="2025-11-28 06:53:37.394083057 +0000 UTC m=+340.364317212" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.757713 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5tscg"] Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.757917 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" 
containerName="extract-content" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.757930 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerName="extract-content" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.757939 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.757944 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.757953 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c17912-a129-45b4-b833-04493886c507" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.757959 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c17912-a129-45b4-b833-04493886c507" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.757969 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="extract-content" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.757975 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="extract-content" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.757984 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerName="extract-utilities" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.757990 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerName="extract-utilities" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.758000 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d26fb0-3c51-4131-ab05-3e0e407bd9dd" containerName="marketplace-operator" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758006 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d26fb0-3c51-4131-ab05-3e0e407bd9dd" containerName="marketplace-operator" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.758014 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758019 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.758047 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerName="extract-content" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758053 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerName="extract-content" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.758062 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="extract-utilities" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758069 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="extract-utilities" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.758080 4889 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758085 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.758091 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c17912-a129-45b4-b833-04493886c507" containerName="extract-content" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758097 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c17912-a129-45b4-b833-04493886c507" containerName="extract-content" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.758104 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c17912-a129-45b4-b833-04493886c507" containerName="extract-utilities" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758109 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c17912-a129-45b4-b833-04493886c507" containerName="extract-utilities" Nov 28 06:53:37 crc kubenswrapper[4889]: E1128 06:53:37.758117 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerName="extract-utilities" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758123 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerName="extract-utilities" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758328 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bff935-df7b-4a61-8cab-84f408c1c9de" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758338 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c1b472-fa3f-4e55-ac8e-b7e2083d8a3b" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758346 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d26fb0-3c51-4131-ab05-3e0e407bd9dd" containerName="marketplace-operator" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758353 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="214d7b41-e8c9-4e25-bf80-48ff31b4a29b" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.758363 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c17912-a129-45b4-b833-04493886c507" containerName="registry-server" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.759050 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.761610 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.770065 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5tscg"] Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.840766 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e790ac24-fca9-4d15-942d-2469cdf17620-catalog-content\") pod \"certified-operators-5tscg\" (UID: \"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.840888 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e790ac24-fca9-4d15-942d-2469cdf17620-utilities\") pod \"certified-operators-5tscg\" (UID: \"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.840914 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffwbp\" (UniqueName: \"kubernetes.io/projected/e790ac24-fca9-4d15-942d-2469cdf17620-kube-api-access-ffwbp\") pod \"certified-operators-5tscg\" (UID: \"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.941417 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e790ac24-fca9-4d15-942d-2469cdf17620-catalog-content\") pod \"certified-operators-5tscg\" (UID: \"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.941520 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e790ac24-fca9-4d15-942d-2469cdf17620-utilities\") pod \"certified-operators-5tscg\" (UID: \"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.941544 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffwbp\" (UniqueName: \"kubernetes.io/projected/e790ac24-fca9-4d15-942d-2469cdf17620-kube-api-access-ffwbp\") pod \"certified-operators-5tscg\" (UID: \"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.941983 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e790ac24-fca9-4d15-942d-2469cdf17620-catalog-content\") pod \"certified-operators-5tscg\" (UID: \"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.942169 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e790ac24-fca9-4d15-942d-2469cdf17620-utilities\") pod \"certified-operators-5tscg\" (UID: 
\"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.961443 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q6kxh"] Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.963448 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.971889 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.973056 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6kxh"] Nov 28 06:53:37 crc kubenswrapper[4889]: I1128 06:53:37.973272 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffwbp\" (UniqueName: \"kubernetes.io/projected/e790ac24-fca9-4d15-942d-2469cdf17620-kube-api-access-ffwbp\") pod \"certified-operators-5tscg\" (UID: \"e790ac24-fca9-4d15-942d-2469cdf17620\") " pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.042927 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-utilities\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.043026 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsll\" (UniqueName: \"kubernetes.io/projected/aa42c040-43a1-475c-99d0-f7bb57a22a74-kube-api-access-jvsll\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.043070 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-catalog-content\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.074093 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.144237 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvsll\" (UniqueName: \"kubernetes.io/projected/aa42c040-43a1-475c-99d0-f7bb57a22a74-kube-api-access-jvsll\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.144284 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-catalog-content\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.144342 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-utilities\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.144907 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-utilities\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.144968 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-catalog-content\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.164313 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvsll\" (UniqueName: \"kubernetes.io/projected/aa42c040-43a1-475c-99d0-f7bb57a22a74-kube-api-access-jvsll\") pod \"redhat-marketplace-q6kxh\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.305548 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.507397 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5tscg"] Nov 28 06:53:38 crc kubenswrapper[4889]: W1128 06:53:38.513667 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode790ac24_fca9_4d15_942d_2469cdf17620.slice/crio-0237544fc9bf6c646041be08c074c7a1b8200452fd4f1ef2548ffb438f6bfcaa WatchSource:0}: Error finding container 0237544fc9bf6c646041be08c074c7a1b8200452fd4f1ef2548ffb438f6bfcaa: Status 404 returned error can't find the container with id 0237544fc9bf6c646041be08c074c7a1b8200452fd4f1ef2548ffb438f6bfcaa Nov 28 06:53:38 crc kubenswrapper[4889]: I1128 06:53:38.699637 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6kxh"] Nov 28 06:53:38 crc kubenswrapper[4889]: W1128 06:53:38.718144 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa42c040_43a1_475c_99d0_f7bb57a22a74.slice/crio-e07450ffd5f0c1f38663628cd067cb6db2b062cf18cbc33809dbedbaf08cd99c WatchSource:0}: Error finding container e07450ffd5f0c1f38663628cd067cb6db2b062cf18cbc33809dbedbaf08cd99c: Status 404 returned error can't find the container with id e07450ffd5f0c1f38663628cd067cb6db2b062cf18cbc33809dbedbaf08cd99c Nov 28 06:53:39 crc kubenswrapper[4889]: I1128 06:53:39.367080 4889 generic.go:334] "Generic (PLEG): container finished" podID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerID="0860e9f35ddbf04eeb270b0193aaef8fab7cb2c2ce5c97dc303cc39b91d785e0" exitCode=0 Nov 28 06:53:39 crc kubenswrapper[4889]: I1128 06:53:39.367124 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6kxh" event={"ID":"aa42c040-43a1-475c-99d0-f7bb57a22a74","Type":"ContainerDied","Data":"0860e9f35ddbf04eeb270b0193aaef8fab7cb2c2ce5c97dc303cc39b91d785e0"} Nov 28 06:53:39 crc kubenswrapper[4889]: I1128 06:53:39.367166 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6kxh" event={"ID":"aa42c040-43a1-475c-99d0-f7bb57a22a74","Type":"ContainerStarted","Data":"e07450ffd5f0c1f38663628cd067cb6db2b062cf18cbc33809dbedbaf08cd99c"} Nov 28 06:53:39 crc kubenswrapper[4889]: I1128 06:53:39.370218 4889 generic.go:334] "Generic (PLEG): container finished" podID="e790ac24-fca9-4d15-942d-2469cdf17620" containerID="44656fb4244b06f76306a1923f0e28d41c90ad7e1df7e450913077015fc9684f" exitCode=0 Nov 28 06:53:39 crc kubenswrapper[4889]: I1128 06:53:39.370292 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tscg" event={"ID":"e790ac24-fca9-4d15-942d-2469cdf17620","Type":"ContainerDied","Data":"44656fb4244b06f76306a1923f0e28d41c90ad7e1df7e450913077015fc9684f"} Nov 28 06:53:39 crc kubenswrapper[4889]: I1128 06:53:39.370328 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tscg" event={"ID":"e790ac24-fca9-4d15-942d-2469cdf17620","Type":"ContainerStarted","Data":"0237544fc9bf6c646041be08c074c7a1b8200452fd4f1ef2548ffb438f6bfcaa"} Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.158005 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vgl8h"] Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.230024 4889 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vgl8h"] Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.230190 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.235338 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.314961 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f2d783e-1610-4d7a-b93b-8c840dba16b6-utilities\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.315118 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbj98\" (UniqueName: \"kubernetes.io/projected/7f2d783e-1610-4d7a-b93b-8c840dba16b6-kube-api-access-tbj98\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.315191 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f2d783e-1610-4d7a-b93b-8c840dba16b6-catalog-content\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.360763 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zvwjp"] Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.362100 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.364749 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.372421 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvwjp"] Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.379192 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tscg" event={"ID":"e790ac24-fca9-4d15-942d-2469cdf17620","Type":"ContainerStarted","Data":"8fba5700631ed798abe92f42d17b8cbf4a1c568288790223c79dd3ce62ac439a"} Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.416699 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-catalog-content\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.416854 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbj98\" (UniqueName: \"kubernetes.io/projected/7f2d783e-1610-4d7a-b93b-8c840dba16b6-kube-api-access-tbj98\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.416931 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwnx7\" (UniqueName: \"kubernetes.io/projected/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-kube-api-access-bwnx7\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.417024 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f2d783e-1610-4d7a-b93b-8c840dba16b6-catalog-content\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.417128 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-utilities\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.417188 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f2d783e-1610-4d7a-b93b-8c840dba16b6-utilities\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.417516 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f2d783e-1610-4d7a-b93b-8c840dba16b6-catalog-content\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " 
pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.417689 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f2d783e-1610-4d7a-b93b-8c840dba16b6-utilities\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.434020 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbj98\" (UniqueName: \"kubernetes.io/projected/7f2d783e-1610-4d7a-b93b-8c840dba16b6-kube-api-access-tbj98\") pod \"redhat-operators-vgl8h\" (UID: \"7f2d783e-1610-4d7a-b93b-8c840dba16b6\") " pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.518498 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-utilities\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.518918 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-catalog-content\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.519700 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-utilities\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.519807 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-catalog-content\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.519849 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnx7\" (UniqueName: \"kubernetes.io/projected/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-kube-api-access-bwnx7\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.539179 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnx7\" (UniqueName: \"kubernetes.io/projected/94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7-kube-api-access-bwnx7\") pod \"community-operators-zvwjp\" (UID: \"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7\") " pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.554465 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.684817 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:40 crc kubenswrapper[4889]: I1128 06:53:40.950938 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vgl8h"] Nov 28 06:53:40 crc kubenswrapper[4889]: W1128 06:53:40.957451 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f2d783e_1610_4d7a_b93b_8c840dba16b6.slice/crio-b01bd140135b851e542f9f71fdd4a82c2adfa8cce754d32848874157e83d5ab2 WatchSource:0}: Error finding container b01bd140135b851e542f9f71fdd4a82c2adfa8cce754d32848874157e83d5ab2: Status 404 returned error can't find the container with id b01bd140135b851e542f9f71fdd4a82c2adfa8cce754d32848874157e83d5ab2 Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.065200 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvwjp"] Nov 28 06:53:41 crc kubenswrapper[4889]: W1128 06:53:41.072755 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94cbc9b6_1be5_4d8f_8fe2_fe4c191b45d7.slice/crio-5bdfc8d04fe181c94cee2b75257cad4ffed63b81d190da48c76f9f05c53e2912 WatchSource:0}: Error finding container 5bdfc8d04fe181c94cee2b75257cad4ffed63b81d190da48c76f9f05c53e2912: Status 404 returned error can't find the container with id 5bdfc8d04fe181c94cee2b75257cad4ffed63b81d190da48c76f9f05c53e2912 Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.389875 4889 generic.go:334] "Generic (PLEG): container finished" podID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerID="6bdc26c1fc36275701cee2dcc4083e4ca77bc0d19e219b12b6baf96522d6afd3" exitCode=0 Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.389944 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6kxh" event={"ID":"aa42c040-43a1-475c-99d0-f7bb57a22a74","Type":"ContainerDied","Data":"6bdc26c1fc36275701cee2dcc4083e4ca77bc0d19e219b12b6baf96522d6afd3"} Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.392987 4889 generic.go:334] "Generic (PLEG): container finished" podID="7f2d783e-1610-4d7a-b93b-8c840dba16b6" containerID="c347b1c47d7a1fb580dcd826961d625af3bee586b7c9daccb702110ffaa9848d" exitCode=0 Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.393648 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgl8h" event={"ID":"7f2d783e-1610-4d7a-b93b-8c840dba16b6","Type":"ContainerDied","Data":"c347b1c47d7a1fb580dcd826961d625af3bee586b7c9daccb702110ffaa9848d"} Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.393674 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgl8h" event={"ID":"7f2d783e-1610-4d7a-b93b-8c840dba16b6","Type":"ContainerStarted","Data":"b01bd140135b851e542f9f71fdd4a82c2adfa8cce754d32848874157e83d5ab2"} Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.398989 4889 generic.go:334] "Generic (PLEG): container finished" podID="e790ac24-fca9-4d15-942d-2469cdf17620" containerID="8fba5700631ed798abe92f42d17b8cbf4a1c568288790223c79dd3ce62ac439a" exitCode=0 Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.399066 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tscg" event={"ID":"e790ac24-fca9-4d15-942d-2469cdf17620","Type":"ContainerDied","Data":"8fba5700631ed798abe92f42d17b8cbf4a1c568288790223c79dd3ce62ac439a"} Nov 28 
06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.410420 4889 generic.go:334] "Generic (PLEG): container finished" podID="94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7" containerID="fb5885a8028fa7f478d05d9a3fcb9f341f5734f33cd2d063b930773bd10d409f" exitCode=0 Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.410465 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvwjp" event={"ID":"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7","Type":"ContainerDied","Data":"fb5885a8028fa7f478d05d9a3fcb9f341f5734f33cd2d063b930773bd10d409f"} Nov 28 06:53:41 crc kubenswrapper[4889]: I1128 06:53:41.410497 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvwjp" event={"ID":"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7","Type":"ContainerStarted","Data":"5bdfc8d04fe181c94cee2b75257cad4ffed63b81d190da48c76f9f05c53e2912"} Nov 28 06:53:42 crc kubenswrapper[4889]: I1128 06:53:42.419460 4889 generic.go:334] "Generic (PLEG): container finished" podID="94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7" containerID="e68c924875ba1b9db5bad4af3173ce49ec2fb2e869ca3d1c8776e7747de80fe8" exitCode=0 Nov 28 06:53:42 crc kubenswrapper[4889]: I1128 06:53:42.419555 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvwjp" event={"ID":"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7","Type":"ContainerDied","Data":"e68c924875ba1b9db5bad4af3173ce49ec2fb2e869ca3d1c8776e7747de80fe8"} Nov 28 06:53:42 crc kubenswrapper[4889]: I1128 06:53:42.424329 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6kxh" event={"ID":"aa42c040-43a1-475c-99d0-f7bb57a22a74","Type":"ContainerStarted","Data":"b7adce3d0d2d02f029f3298d01c0fb6823ea530fe12bf6751c7df7cfe4449023"} Nov 28 06:53:42 crc kubenswrapper[4889]: I1128 06:53:42.426476 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgl8h" event={"ID":"7f2d783e-1610-4d7a-b93b-8c840dba16b6","Type":"ContainerStarted","Data":"a2ad2864637f34bee7fa4db5897b879fb23ea5f806ffb4c7d5df2058bb7cc0de"} Nov 28 06:53:42 crc kubenswrapper[4889]: I1128 06:53:42.430868 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5tscg" event={"ID":"e790ac24-fca9-4d15-942d-2469cdf17620","Type":"ContainerStarted","Data":"60f5104736713db084098744dedfcab1b8d8260e32d89d6c2c13004081fbd5cb"} Nov 28 06:53:42 crc kubenswrapper[4889]: I1128 06:53:42.480074 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5tscg" podStartSLOduration=3.005542534 podStartE2EDuration="5.480055691s" podCreationTimestamp="2025-11-28 06:53:37 +0000 UTC" firstStartedPulling="2025-11-28 06:53:39.372645704 +0000 UTC m=+342.342879859" lastFinishedPulling="2025-11-28 06:53:41.847158851 +0000 UTC m=+344.817393016" observedRunningTime="2025-11-28 06:53:42.476595246 +0000 UTC m=+345.446829391" watchObservedRunningTime="2025-11-28 06:53:42.480055691 +0000 UTC m=+345.450289846" Nov 28 06:53:42 crc kubenswrapper[4889]: I1128 06:53:42.497649 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q6kxh" podStartSLOduration=2.979022427 podStartE2EDuration="5.497627569s" podCreationTimestamp="2025-11-28 06:53:37 +0000 UTC" firstStartedPulling="2025-11-28 06:53:39.368479592 +0000 UTC m=+342.338713747" lastFinishedPulling="2025-11-28 06:53:41.887084734 +0000 UTC m=+344.857318889" 
observedRunningTime="2025-11-28 06:53:42.494248427 +0000 UTC m=+345.464482592" watchObservedRunningTime="2025-11-28 06:53:42.497627569 +0000 UTC m=+345.467861724" Nov 28 06:53:43 crc kubenswrapper[4889]: I1128 06:53:43.437297 4889 generic.go:334] "Generic (PLEG): container finished" podID="7f2d783e-1610-4d7a-b93b-8c840dba16b6" containerID="a2ad2864637f34bee7fa4db5897b879fb23ea5f806ffb4c7d5df2058bb7cc0de" exitCode=0 Nov 28 06:53:43 crc kubenswrapper[4889]: I1128 06:53:43.437474 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgl8h" event={"ID":"7f2d783e-1610-4d7a-b93b-8c840dba16b6","Type":"ContainerDied","Data":"a2ad2864637f34bee7fa4db5897b879fb23ea5f806ffb4c7d5df2058bb7cc0de"} Nov 28 06:53:44 crc kubenswrapper[4889]: I1128 06:53:44.452915 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvwjp" event={"ID":"94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7","Type":"ContainerStarted","Data":"6fa3f05034d152a50ac9877ab28e99a855a0835b108b8b19bd2d06cccd2c1d48"} Nov 28 06:53:44 crc kubenswrapper[4889]: I1128 06:53:44.479156 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zvwjp" podStartSLOduration=2.740208532 podStartE2EDuration="4.479130527s" podCreationTimestamp="2025-11-28 06:53:40 +0000 UTC" firstStartedPulling="2025-11-28 06:53:41.414307828 +0000 UTC m=+344.384541983" lastFinishedPulling="2025-11-28 06:53:43.153229803 +0000 UTC m=+346.123463978" observedRunningTime="2025-11-28 06:53:44.472624558 +0000 UTC m=+347.442858793" watchObservedRunningTime="2025-11-28 06:53:44.479130527 +0000 UTC m=+347.449364682" Nov 28 06:53:45 crc kubenswrapper[4889]: I1128 06:53:45.460966 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgl8h" event={"ID":"7f2d783e-1610-4d7a-b93b-8c840dba16b6","Type":"ContainerStarted","Data":"e1355056d069fd7960a69ffed07b45227487757bae3c48a8309c49c6f9a5b4ec"} Nov 28 06:53:45 crc kubenswrapper[4889]: I1128 06:53:45.481360 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vgl8h" podStartSLOduration=2.953030282 podStartE2EDuration="5.481344261s" podCreationTimestamp="2025-11-28 06:53:40 +0000 UTC" firstStartedPulling="2025-11-28 06:53:41.394605218 +0000 UTC m=+344.364839373" lastFinishedPulling="2025-11-28 06:53:43.922919197 +0000 UTC m=+346.893153352" observedRunningTime="2025-11-28 06:53:45.479630679 +0000 UTC m=+348.449864844" watchObservedRunningTime="2025-11-28 06:53:45.481344261 +0000 UTC m=+348.451578416" Nov 28 06:53:45 crc kubenswrapper[4889]: I1128 06:53:45.959848 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5td62" Nov 28 06:53:46 crc kubenswrapper[4889]: I1128 06:53:46.078673 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjpk7"] Nov 28 06:53:48 crc kubenswrapper[4889]: I1128 06:53:48.074358 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:48 crc kubenswrapper[4889]: I1128 06:53:48.074769 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:48 crc kubenswrapper[4889]: I1128 06:53:48.120279 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:48 crc kubenswrapper[4889]: I1128 06:53:48.305947 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:48 crc kubenswrapper[4889]: I1128 06:53:48.306227 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:48 crc kubenswrapper[4889]: I1128 06:53:48.348371 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:48 crc kubenswrapper[4889]: I1128 06:53:48.512177 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5tscg" Nov 28 06:53:48 crc kubenswrapper[4889]: I1128 06:53:48.523520 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 06:53:50 crc kubenswrapper[4889]: I1128 06:53:50.555688 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:50 crc kubenswrapper[4889]: I1128 06:53:50.556869 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:50 crc kubenswrapper[4889]: I1128 06:53:50.605908 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:50 crc kubenswrapper[4889]: I1128 06:53:50.685322 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:50 crc kubenswrapper[4889]: I1128 06:53:50.685857 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:50 crc kubenswrapper[4889]: I1128 06:53:50.720362 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:51 crc kubenswrapper[4889]: I1128 06:53:51.574144 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zvwjp" Nov 28 06:53:51 crc kubenswrapper[4889]: I1128 06:53:51.579397 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vgl8h" Nov 28 06:53:58 crc kubenswrapper[4889]: I1128 06:53:58.782876 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:53:58 crc kubenswrapper[4889]: I1128 06:53:58.784500 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.122413 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" podUID="7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" containerName="registry" 
containerID="cri-o://58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c" gracePeriod=30 Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.542513 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.636657 4889 generic.go:334] "Generic (PLEG): container finished" podID="7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" containerID="58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c" exitCode=0 Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.636753 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.636761 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" event={"ID":"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d","Type":"ContainerDied","Data":"58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c"} Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.636851 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kjpk7" event={"ID":"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d","Type":"ContainerDied","Data":"15cae44dd65b563875415af82b31eb6cc86794d3744abf6651551a5e359738e3"} Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.636889 4889 scope.go:117] "RemoveContainer" containerID="58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.659606 4889 scope.go:117] "RemoveContainer" containerID="58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660108 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgs6w\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-kube-api-access-pgs6w\") pod \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660228 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-trusted-ca\") pod \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660314 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-installation-pull-secrets\") pod \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660353 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-bound-sa-token\") pod \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " Nov 28 06:54:11 crc kubenswrapper[4889]: E1128 06:54:11.660311 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c\": container with ID starting with 
58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c not found: ID does not exist" containerID="58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660408 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-tls\") pod \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660422 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c"} err="failed to get container status \"58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c\": rpc error: code = NotFound desc = could not find container \"58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c\": container with ID starting with 58418426ad35cbe1cb9486956d618525d002bd538922303b8f55f7408985598c not found: ID does not exist" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660513 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-ca-trust-extracted\") pod \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660635 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-certificates\") pod \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.660973 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\" (UID: \"7fb60f8c-3844-43e7-bc7a-a83e7c9f964d\") " Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.661644 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.662033 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.666296 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.666860 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-kube-api-access-pgs6w" (OuterVolumeSpecName: "kube-api-access-pgs6w") pod "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d"). InnerVolumeSpecName "kube-api-access-pgs6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.667058 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.670526 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.681611 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.689840 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" (UID: "7fb60f8c-3844-43e7-bc7a-a83e7c9f964d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.762489 4889 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.762546 4889 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.762558 4889 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.762594 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgs6w\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-kube-api-access-pgs6w\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.762608 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.762620 4889 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.762632 4889 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.970168 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjpk7"] Nov 28 06:54:11 crc kubenswrapper[4889]: I1128 06:54:11.976392 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjpk7"] Nov 28 06:54:13 crc kubenswrapper[4889]: I1128 06:54:13.345475 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" path="/var/lib/kubelet/pods/7fb60f8c-3844-43e7-bc7a-a83e7c9f964d/volumes" Nov 28 06:54:28 crc kubenswrapper[4889]: I1128 06:54:28.782692 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:54:28 crc kubenswrapper[4889]: I1128 06:54:28.783297 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:54:58 crc kubenswrapper[4889]: I1128 06:54:58.782483 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:54:58 crc kubenswrapper[4889]: I1128 06:54:58.783040 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:54:58 crc kubenswrapper[4889]: I1128 06:54:58.783084 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:54:58 crc kubenswrapper[4889]: I1128 06:54:58.783555 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76dd7acd3eaf576a87373e71bc06c8f9b006b2f4d1a51df32d2690f03d71b3d5"} pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:54:58 crc kubenswrapper[4889]: I1128 06:54:58.783610 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" containerID="cri-o://76dd7acd3eaf576a87373e71bc06c8f9b006b2f4d1a51df32d2690f03d71b3d5" gracePeriod=600 Nov 28 06:54:58 crc kubenswrapper[4889]: I1128 06:54:58.906346 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerID="76dd7acd3eaf576a87373e71bc06c8f9b006b2f4d1a51df32d2690f03d71b3d5" exitCode=0 Nov 28 06:54:58 crc kubenswrapper[4889]: I1128 06:54:58.906489 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerDied","Data":"76dd7acd3eaf576a87373e71bc06c8f9b006b2f4d1a51df32d2690f03d71b3d5"} Nov 28 06:54:58 crc kubenswrapper[4889]: I1128 06:54:58.906531 4889 scope.go:117] "RemoveContainer" containerID="7a8bea85bee18a02b0788834ed9b5748e8780f30b1d173402122b2dcc315280f" Nov 28 06:54:59 crc kubenswrapper[4889]: I1128 06:54:59.914884 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"7cb3b598692f9ebef6839e9935cad4d68f3c8d646dc9a22d7d400e870e77c284"} Nov 28 06:57:28 crc kubenswrapper[4889]: I1128 06:57:28.783122 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:57:28 crc kubenswrapper[4889]: I1128 06:57:28.783850 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:57:58 crc kubenswrapper[4889]: I1128 06:57:58.783146 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:57:58 crc kubenswrapper[4889]: I1128 06:57:58.784821 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:58:28 crc kubenswrapper[4889]: I1128 06:58:28.782960 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:58:28 crc kubenswrapper[4889]: I1128 06:58:28.784167 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:58:28 crc kubenswrapper[4889]: I1128 06:58:28.784322 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 06:58:28 crc kubenswrapper[4889]: I1128 06:58:28.786116 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cb3b598692f9ebef6839e9935cad4d68f3c8d646dc9a22d7d400e870e77c284"} pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:58:28 crc kubenswrapper[4889]: I1128 06:58:28.786250 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" containerID="cri-o://7cb3b598692f9ebef6839e9935cad4d68f3c8d646dc9a22d7d400e870e77c284" gracePeriod=600 Nov 28 06:58:29 crc kubenswrapper[4889]: I1128 06:58:29.511234 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerID="7cb3b598692f9ebef6839e9935cad4d68f3c8d646dc9a22d7d400e870e77c284" exitCode=0 Nov 28 06:58:29 crc kubenswrapper[4889]: I1128 06:58:29.511273 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerDied","Data":"7cb3b598692f9ebef6839e9935cad4d68f3c8d646dc9a22d7d400e870e77c284"} Nov 28 06:58:29 crc kubenswrapper[4889]: I1128 06:58:29.511568 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"7ebc63c9a59babecd1fd35c9530a11a72ee07b00bf300c1205eb3965dda30903"} Nov 28 06:58:29 crc kubenswrapper[4889]: I1128 06:58:29.511590 4889 scope.go:117] "RemoveContainer" containerID="76dd7acd3eaf576a87373e71bc06c8f9b006b2f4d1a51df32d2690f03d71b3d5" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 
07:00:00.185053 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d"] Nov 28 07:00:00 crc kubenswrapper[4889]: E1128 07:00:00.185968 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" containerName="registry" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.185992 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" containerName="registry" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.186139 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb60f8c-3844-43e7-bc7a-a83e7c9f964d" containerName="registry" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.186674 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.189616 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.189734 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.195098 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d"] Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.263598 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-secret-volume\") pod \"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.263672 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-config-volume\") pod \"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.263876 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24zf\" (UniqueName: \"kubernetes.io/projected/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-kube-api-access-q24zf\") pod \"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.364847 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-secret-volume\") pod \"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.364920 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-config-volume\") pod 
\"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.364960 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q24zf\" (UniqueName: \"kubernetes.io/projected/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-kube-api-access-q24zf\") pod \"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.365855 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-config-volume\") pod \"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.370660 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-secret-volume\") pod \"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.380974 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24zf\" (UniqueName: \"kubernetes.io/projected/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-kube-api-access-q24zf\") pod \"collect-profiles-29405220-rhd4d\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.512318 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.671240 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2l6bn"] Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.672483 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovn-controller" containerID="cri-o://60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f" gracePeriod=30 Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.672508 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="nbdb" containerID="cri-o://d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266" gracePeriod=30 Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.672556 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25" gracePeriod=30 Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.672618 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kube-rbac-proxy-node" containerID="cri-o://0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd" gracePeriod=30 Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.672672 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovn-acl-logging" containerID="cri-o://9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a" gracePeriod=30 Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.672687 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="northd" containerID="cri-o://e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e" gracePeriod=30 Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.672794 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="sbdb" containerID="cri-o://f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180" gracePeriod=30 Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.733222 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" containerID="cri-o://14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996" gracePeriod=30 Nov 28 07:00:00 crc kubenswrapper[4889]: E1128 07:00:00.749800 4889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(a128833589a10c1548e8f0b420112422787aadc82f7fe367cc68f7078cce5fb6): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29405220-rhd4d to CNI network \"multus-cni-network\": plugin type=\"multus-shim\" name=\"multus-cni-network\" failed (add): CmdAdd (shim): failed to send CNI request: Post \"http://dummy/cni\": EOF: StdinData: {\"binDir\":\"/var/lib/cni/bin\",\"clusterNetwork\":\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\",\"cniVersion\":\"0.3.1\",\"daemonSocketDir\":\"/run/multus/socket\",\"globalNamespaces\":\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\",\"logLevel\":\"verbose\",\"logToStderr\":true,\"name\":\"multus-cni-network\",\"namespaceIsolation\":true,\"type\":\"multus-shim\"}" Nov 28 07:00:00 crc kubenswrapper[4889]: E1128 07:00:00.749898 4889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(a128833589a10c1548e8f0b420112422787aadc82f7fe367cc68f7078cce5fb6): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29405220-rhd4d to CNI network \"multus-cni-network\": plugin type=\"multus-shim\" name=\"multus-cni-network\" failed (add): CmdAdd (shim): failed to send CNI request: Post \"http://dummy/cni\": EOF: StdinData: {\"binDir\":\"/var/lib/cni/bin\",\"clusterNetwork\":\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\",\"cniVersion\":\"0.3.1\",\"daemonSocketDir\":\"/run/multus/socket\",\"globalNamespaces\":\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\",\"logLevel\":\"verbose\",\"logToStderr\":true,\"name\":\"multus-cni-network\",\"namespaceIsolation\":true,\"type\":\"multus-shim\"}" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: E1128 07:00:00.749930 4889 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(a128833589a10c1548e8f0b420112422787aadc82f7fe367cc68f7078cce5fb6): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29405220-rhd4d to CNI network \"multus-cni-network\": plugin type=\"multus-shim\" name=\"multus-cni-network\" failed (add): CmdAdd (shim): failed to send CNI request: Post \"http://dummy/cni\": EOF: StdinData: {\"binDir\":\"/var/lib/cni/bin\",\"clusterNetwork\":\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\",\"cniVersion\":\"0.3.1\",\"daemonSocketDir\":\"/run/multus/socket\",\"globalNamespaces\":\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\",\"logLevel\":\"verbose\",\"logToStderr\":true,\"name\":\"multus-cni-network\",\"namespaceIsolation\":true,\"type\":\"multus-shim\"}" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:00 crc kubenswrapper[4889]: E1128 07:00:00.750012 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager(4e499ea2-0c14-4895-bff8-a67adcd6b0c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager(4e499ea2-0c14-4895-bff8-a67adcd6b0c6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(a128833589a10c1548e8f0b420112422787aadc82f7fe367cc68f7078cce5fb6): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29405220-rhd4d to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): failed to send CNI request: Post \\\"http://dummy/cni\\\": EOF: StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" podUID="4e499ea2-0c14-4895-bff8-a67adcd6b0c6" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.948758 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/3.log" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.952195 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovn-acl-logging/0.log" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.952631 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovn-controller/0.log" Nov 28 07:00:00 crc kubenswrapper[4889]: I1128 07:00:00.953068 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.003089 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovnkube-controller/3.log" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.005818 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovn-acl-logging/0.log" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.007211 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2l6bn_6de1d273-3dcf-4772-bc88-323f46e1ead5/ovn-controller/0.log" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009063 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996" exitCode=0 Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009092 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180" exitCode=0 Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009102 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266" exitCode=0 Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009113 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e" exitCode=0 Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009121 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25" exitCode=0 Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009129 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd" exitCode=0 Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009136 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a" exitCode=143 Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009144 4889 generic.go:334] "Generic (PLEG): container finished" podID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerID="60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f" exitCode=143 Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009199 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009232 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009246 4889 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009260 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009271 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009286 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009298 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009311 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009318 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009328 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009335 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009342 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009348 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009354 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009361 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009369 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009380 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009387 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009393 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009399 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009405 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009412 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009418 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009425 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009433 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009440 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009449 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009459 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009466 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 
07:00:01.009472 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009479 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009485 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009490 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009496 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009502 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009509 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009516 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009525 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" event={"ID":"6de1d273-3dcf-4772-bc88-323f46e1ead5","Type":"ContainerDied","Data":"a9829ea12def74cb959107004588798ce745c8954d3cd73c1eea2d9f52f78eab"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009536 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009543 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009550 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009558 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009565 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 
07:00:01.009574 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009581 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009589 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009596 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009603 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009619 4889 scope.go:117] "RemoveContainer" containerID="14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.009846 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2l6bn" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.011894 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lxpz8"] Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012112 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012125 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012158 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovn-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012167 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovn-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012179 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="nbdb" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012186 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="nbdb" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012199 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kubecfg-setup" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012205 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kubecfg-setup" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012217 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: 
I1128 07:00:01.012224 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012233 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="northd" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012239 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="northd" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012248 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="sbdb" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012253 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="sbdb" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012261 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012266 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012274 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovn-acl-logging" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012287 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovn-acl-logging" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012294 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012299 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012308 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kube-rbac-proxy-node" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012314 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kube-rbac-proxy-node" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012320 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012325 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.012333 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012338 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012427 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller" Nov 28 
07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012438 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovn-acl-logging"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012446 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012453 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012459 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="sbdb"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012466 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012475 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="nbdb"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012484 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovn-controller"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012493 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kube-rbac-proxy-node"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012500 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="kube-rbac-proxy-ovn-metrics"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012506 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="northd"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.012653 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" containerName="ovnkube-controller"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.014279 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.014371 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/2.log"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.014823 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/1.log"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.014856 4889 generic.go:334] "Generic (PLEG): container finished" podID="68ddfdcf-000e-45ae-a737-d3dd28115d5b" containerID="52ae5f5374660ca9bd0699777aa53aaebd429485f4384242509e782ae0c613a9" exitCode=2
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.014913 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.014920 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vtjm7" event={"ID":"68ddfdcf-000e-45ae-a737-d3dd28115d5b","Type":"ContainerDied","Data":"52ae5f5374660ca9bd0699777aa53aaebd429485f4384242509e782ae0c613a9"}
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.014951 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b"}
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.015245 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.015333 4889 scope.go:117] "RemoveContainer" containerID="52ae5f5374660ca9bd0699777aa53aaebd429485f4384242509e782ae0c613a9"
Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.015566 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-vtjm7_openshift-multus(68ddfdcf-000e-45ae-a737-d3dd28115d5b)\"" pod="openshift-multus/multus-vtjm7" podUID="68ddfdcf-000e-45ae-a737-d3dd28115d5b"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.043141 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"
Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.057521 4889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(f055f6f9cff885fc3c8b5a9b3219ca6675e195c93b918c76b5c84e334d0379ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.057580 4889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(f055f6f9cff885fc3c8b5a9b3219ca6675e195c93b918c76b5c84e334d0379ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d"
Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.057608 4889 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(f055f6f9cff885fc3c8b5a9b3219ca6675e195c93b918c76b5c84e334d0379ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d"
Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.057651 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager(4e499ea2-0c14-4895-bff8-a67adcd6b0c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager(4e499ea2-0c14-4895-bff8-a67adcd6b0c6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(f055f6f9cff885fc3c8b5a9b3219ca6675e195c93b918c76b5c84e334d0379ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" podUID="4e499ea2-0c14-4895-bff8-a67adcd6b0c6"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.071471 4889 scope.go:117] "RemoveContainer" containerID="f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074406 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-slash\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074439 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-var-lib-openvswitch\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074591 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-netns\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074618 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-kubelet\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074635 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-ovn-kubernetes\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074670 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-config\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074687 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-systemd\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074687 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074697 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074725 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-node-log\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074760 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-node-log" (OuterVolumeSpecName: "node-log") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074760 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074780 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074816 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074856 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-openvswitch\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074883 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-ovn\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074912 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvxwh\" (UniqueName: \"kubernetes.io/projected/6de1d273-3dcf-4772-bc88-323f46e1ead5-kube-api-access-tvxwh\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074926 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-systemd-units\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074943 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-netd\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.074994 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovn-node-metrics-cert\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075027 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-script-lib\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075041 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-log-socket\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075059 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-bin\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075072 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-etc-openvswitch\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075093 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-env-overrides\") pod \"6de1d273-3dcf-4772-bc88-323f46e1ead5\" (UID: \"6de1d273-3dcf-4772-bc88-323f46e1ead5\") "
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075227 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075290 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075275 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075228 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-run-netns\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075351 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075351 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075382 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-ovnkube-config\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075458 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-cni-bin\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075497 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-ovnkube-script-lib\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075557 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-kubelet\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075580 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-etc-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075633 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-slash\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075388 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075360 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075425 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-log-socket" (OuterVolumeSpecName: "log-socket") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075470 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075835 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075865 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075943 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-log-socket\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.075998 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-ovn\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076034 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-var-lib-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076052 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076069 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xtm\" (UniqueName: \"kubernetes.io/projected/f049022d-736b-41aa-8ffa-eff4b50991c5-kube-api-access-69xtm\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076087 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076108 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-systemd\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076125 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f049022d-736b-41aa-8ffa-eff4b50991c5-ovn-node-metrics-cert\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076145 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076160 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-cni-netd\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076176 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-node-log\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076214 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-systemd-units\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076228 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-env-overrides\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076262 4889 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076272 4889 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-netns\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076281 4889 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-kubelet\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076289 4889 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076296 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076304 4889 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-node-log\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076314 4889 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076323 4889 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-openvswitch\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076331 4889 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076340 4889 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-systemd-units\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076348 4889 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-netd\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076358 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076519 4889 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-log-socket\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076547 4889 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-cni-bin\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076565 4889 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.076578 4889 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de1d273-3dcf-4772-bc88-323f46e1ead5-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.077236 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-slash" (OuterVolumeSpecName: "host-slash") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.080760 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6de1d273-3dcf-4772-bc88-323f46e1ead5-kube-api-access-tvxwh" (OuterVolumeSpecName: "kube-api-access-tvxwh") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "kube-api-access-tvxwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.080934 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.086239 4889 scope.go:117] "RemoveContainer" containerID="d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.087458 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6de1d273-3dcf-4772-bc88-323f46e1ead5" (UID: "6de1d273-3dcf-4772-bc88-323f46e1ead5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.100568 4889 scope.go:117] "RemoveContainer" containerID="e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.115276 4889 scope.go:117] "RemoveContainer" containerID="3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.131008 4889 scope.go:117] "RemoveContainer" containerID="0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.148553 4889 scope.go:117] "RemoveContainer" containerID="9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.163458 4889 scope.go:117] "RemoveContainer" containerID="60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177533 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177601 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-systemd\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177634 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f049022d-736b-41aa-8ffa-eff4b50991c5-ovn-node-metrics-cert\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177661 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177681 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-cni-netd\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177727 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-node-log\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177725 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177772 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-systemd-units\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177826 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-env-overrides\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177819 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177850 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-run-netns\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177873 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-ovnkube-config\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177935 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-systemd\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177939 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-cni-bin\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.177991 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-cni-bin\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178036 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-ovnkube-script-lib\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178065 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-cni-netd\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178098 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-systemd-units\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178109 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-node-log\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178730 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-kubelet\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178813 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-etc-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178858 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-slash\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178915 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-log-socket\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.178974 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-ovn\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179042 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-var-lib-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179071 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179077 4889 scope.go:117] "RemoveContainer" containerID="8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179124 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69xtm\" (UniqueName: \"kubernetes.io/projected/f049022d-736b-41aa-8ffa-eff4b50991c5-kube-api-access-69xtm\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179202 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-slash\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179230 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-log-socket\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179242 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-run-netns\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179274 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-ovn\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179281 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvxwh\" (UniqueName: \"kubernetes.io/projected/6de1d273-3dcf-4772-bc88-323f46e1ead5-kube-api-access-tvxwh\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179331 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-var-lib-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179318 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-etc-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179281 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-host-kubelet\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179425 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f049022d-736b-41aa-8ffa-eff4b50991c5-run-openvswitch\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179494 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de1d273-3dcf-4772-bc88-323f46e1ead5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179516 4889 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-host-slash\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.179529 4889 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6de1d273-3dcf-4772-bc88-323f46e1ead5-run-systemd\") on node \"crc\" DevicePath \"\""
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.180128 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-ovnkube-config\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.180297 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-env-overrides\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.180385 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f049022d-736b-41aa-8ffa-eff4b50991c5-ovnkube-script-lib\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.185905 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f049022d-736b-41aa-8ffa-eff4b50991c5-ovn-node-metrics-cert\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.197161 4889 scope.go:117] "RemoveContainer" containerID="14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"
Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.197636 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": container with ID starting with 14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996 not found: ID does not exist" containerID="14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.197717 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} err="failed to get container status \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": rpc error: code = NotFound desc = could not find container \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": container with ID starting with 14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996 not found: ID does not exist"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.197773 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.197931 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69xtm\" (UniqueName: \"kubernetes.io/projected/f049022d-736b-41aa-8ffa-eff4b50991c5-kube-api-access-69xtm\") pod \"ovnkube-node-lxpz8\" (UID: \"f049022d-736b-41aa-8ffa-eff4b50991c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8"
Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.198181 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": container with ID starting with 33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e not found: ID does not exist" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.198208 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} err="failed to get container status \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": rpc error: code = NotFound desc = could not find container \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": container with ID starting with 33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e not found: ID does not exist"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.198225 4889 scope.go:117] "RemoveContainer" containerID="f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"
Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.198563 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": container with ID starting with f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180 not found: ID does not exist" containerID="f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.198622 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} err="failed to get container status \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": rpc error: code = NotFound desc = could not find container \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": container with ID starting with f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180 not found: ID does not exist"
Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.198649 4889 scope.go:117] "RemoveContainer" containerID="d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.199373 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": container with ID starting with d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266 not found: ID does not exist" containerID="d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.199414 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} err="failed to get container status \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": rpc error: code = NotFound desc = could not find container \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": container with ID starting with d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.199439 4889 scope.go:117] "RemoveContainer" containerID="e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.199763 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": container with ID starting with e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e not found: ID does not exist" containerID="e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.199810 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} err="failed to get container status \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": rpc error: code = NotFound desc = could not find container \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": container with ID starting with e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.199840 4889 scope.go:117] "RemoveContainer" containerID="3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.200096 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": container with ID starting with 3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25 not found: ID does not exist" containerID="3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.200129 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} err="failed to get container status \"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": rpc error: code = NotFound desc = could not find container 
\"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": container with ID starting with 3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.200154 4889 scope.go:117] "RemoveContainer" containerID="0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.200357 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": container with ID starting with 0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd not found: ID does not exist" containerID="0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.200393 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} err="failed to get container status \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": rpc error: code = NotFound desc = could not find container \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": container with ID starting with 0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.200413 4889 scope.go:117] "RemoveContainer" containerID="9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.200828 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": container with ID starting with 9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a not found: ID does not exist" containerID="9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.200856 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} err="failed to get container status \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": rpc error: code = NotFound desc = could not find container \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": container with ID starting with 9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.200874 4889 scope.go:117] "RemoveContainer" containerID="60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.201085 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": container with ID starting with 60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f not found: ID does not exist" containerID="60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.201143 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} 
err="failed to get container status \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": rpc error: code = NotFound desc = could not find container \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": container with ID starting with 60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.201170 4889 scope.go:117] "RemoveContainer" containerID="8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa" Nov 28 07:00:01 crc kubenswrapper[4889]: E1128 07:00:01.201469 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": container with ID starting with 8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa not found: ID does not exist" containerID="8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.201499 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} err="failed to get container status \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": rpc error: code = NotFound desc = could not find container \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": container with ID starting with 8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.201518 4889 scope.go:117] "RemoveContainer" containerID="14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.201856 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} err="failed to get container status \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": rpc error: code = NotFound desc = could not find container \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": container with ID starting with 14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.201906 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.202202 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} err="failed to get container status \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": rpc error: code = NotFound desc = could not find container \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": container with ID starting with 33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.202229 4889 scope.go:117] "RemoveContainer" containerID="f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.202676 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} 
err="failed to get container status \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": rpc error: code = NotFound desc = could not find container \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": container with ID starting with f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.202731 4889 scope.go:117] "RemoveContainer" containerID="d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.202972 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} err="failed to get container status \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": rpc error: code = NotFound desc = could not find container \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": container with ID starting with d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.203000 4889 scope.go:117] "RemoveContainer" containerID="e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.203249 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} err="failed to get container status \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": rpc error: code = NotFound desc = could not find container \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": container with ID starting with e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.203279 4889 scope.go:117] "RemoveContainer" containerID="3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.203569 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} err="failed to get container status \"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": rpc error: code = NotFound desc = could not find container \"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": container with ID starting with 3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.203601 4889 scope.go:117] "RemoveContainer" containerID="0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.203912 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} err="failed to get container status \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": rpc error: code = NotFound desc = could not find container \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": container with ID starting with 0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.203947 4889 scope.go:117] "RemoveContainer" 
containerID="9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.204271 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} err="failed to get container status \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": rpc error: code = NotFound desc = could not find container \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": container with ID starting with 9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.204294 4889 scope.go:117] "RemoveContainer" containerID="60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.204600 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} err="failed to get container status \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": rpc error: code = NotFound desc = could not find container \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": container with ID starting with 60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.204633 4889 scope.go:117] "RemoveContainer" containerID="8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.204922 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} err="failed to get container status \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": rpc error: code = NotFound desc = could not find container \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": container with ID starting with 8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.204958 4889 scope.go:117] "RemoveContainer" containerID="14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.205173 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} err="failed to get container status \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": rpc error: code = NotFound desc = could not find container \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": container with ID starting with 14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.205193 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.205457 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} err="failed to get container status \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": rpc error: code = NotFound desc = could not find 
container \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": container with ID starting with 33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.205486 4889 scope.go:117] "RemoveContainer" containerID="f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.205826 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} err="failed to get container status \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": rpc error: code = NotFound desc = could not find container \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": container with ID starting with f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.205862 4889 scope.go:117] "RemoveContainer" containerID="d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.206142 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} err="failed to get container status \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": rpc error: code = NotFound desc = could not find container \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": container with ID starting with d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.206171 4889 scope.go:117] "RemoveContainer" containerID="e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.206571 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} err="failed to get container status \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": rpc error: code = NotFound desc = could not find container \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": container with ID starting with e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.206596 4889 scope.go:117] "RemoveContainer" containerID="3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.206910 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} err="failed to get container status \"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": rpc error: code = NotFound desc = could not find container \"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": container with ID starting with 3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.206930 4889 scope.go:117] "RemoveContainer" containerID="0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.207189 4889 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} err="failed to get container status \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": rpc error: code = NotFound desc = could not find container \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": container with ID starting with 0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.207209 4889 scope.go:117] "RemoveContainer" containerID="9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.207475 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} err="failed to get container status \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": rpc error: code = NotFound desc = could not find container \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": container with ID starting with 9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.207498 4889 scope.go:117] "RemoveContainer" containerID="60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.207769 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} err="failed to get container status \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": rpc error: code = NotFound desc = could not find container \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": container with ID starting with 60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.207790 4889 scope.go:117] "RemoveContainer" containerID="8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.207999 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} err="failed to get container status \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": rpc error: code = NotFound desc = could not find container \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": container with ID starting with 8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.208019 4889 scope.go:117] "RemoveContainer" containerID="14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.208265 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} err="failed to get container status \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": rpc error: code = NotFound desc = could not find container \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": container with ID starting with 
14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.208290 4889 scope.go:117] "RemoveContainer" containerID="33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.208565 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e"} err="failed to get container status \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": rpc error: code = NotFound desc = could not find container \"33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e\": container with ID starting with 33fbf17fcc68896db95d945a921911844f6f23268efc2ac64fdf922a717a0c9e not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.208591 4889 scope.go:117] "RemoveContainer" containerID="f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.208812 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180"} err="failed to get container status \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": rpc error: code = NotFound desc = could not find container \"f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180\": container with ID starting with f366bbaefa7f1a2a639c7d6c764110166090bc808dc4c94a99442bb7a523d180 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.208840 4889 scope.go:117] "RemoveContainer" containerID="d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.209057 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266"} err="failed to get container status \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": rpc error: code = NotFound desc = could not find container \"d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266\": container with ID starting with d24f0aadb3fbf04e4595733814f38a8c6a1a7110a87f8ac3531a918b3f03a266 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.209082 4889 scope.go:117] "RemoveContainer" containerID="e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.209300 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e"} err="failed to get container status \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": rpc error: code = NotFound desc = could not find container \"e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e\": container with ID starting with e31f7adbe5a662a3db2c5590ba15672444a99649752d63f7fa1ca3f394e9b73e not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.209321 4889 scope.go:117] "RemoveContainer" containerID="3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.209576 4889 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25"} err="failed to get container status \"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": rpc error: code = NotFound desc = could not find container \"3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25\": container with ID starting with 3cd6e8872c091e8cf84ca475b7505cbd34e1b6f679a97d3f39e47679e2a3eb25 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.209612 4889 scope.go:117] "RemoveContainer" containerID="0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.209916 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd"} err="failed to get container status \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": rpc error: code = NotFound desc = could not find container \"0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd\": container with ID starting with 0ce27cb16c8365fd8b944ad67ad2afbbe58a8c7be76b42df78ba1f98899ed4cd not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.209937 4889 scope.go:117] "RemoveContainer" containerID="9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.210145 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a"} err="failed to get container status \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": rpc error: code = NotFound desc = could not find container \"9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a\": container with ID starting with 9cf070c09b99dd6594eafe7c59206547331d1af121c9bdabb61311259d237d9a not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.210164 4889 scope.go:117] "RemoveContainer" containerID="60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.210413 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f"} err="failed to get container status \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": rpc error: code = NotFound desc = could not find container \"60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f\": container with ID starting with 60787d6c02738f992012d25b246d743e3fdca2e6b11861e8c3fd63bdb06cb74f not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.210436 4889 scope.go:117] "RemoveContainer" containerID="8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.210687 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa"} err="failed to get container status \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": rpc error: code = NotFound desc = could not find container \"8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa\": container with ID starting with 8c55bdb9ed471e1d6030dad74e551c9c90636471e7c407848e1584db70946eaa not found: ID does not exist" Nov 
28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.210733 4889 scope.go:117] "RemoveContainer" containerID="14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.210958 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996"} err="failed to get container status \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": rpc error: code = NotFound desc = could not find container \"14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996\": container with ID starting with 14db4f90b14fb226cf33669ad3b012f7e39440e9815310448a1f66adbbcfd996 not found: ID does not exist" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.329950 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.342325 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2l6bn"] Nov 28 07:00:01 crc kubenswrapper[4889]: I1128 07:00:01.343462 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2l6bn"] Nov 28 07:00:02 crc kubenswrapper[4889]: I1128 07:00:02.022375 4889 generic.go:334] "Generic (PLEG): container finished" podID="f049022d-736b-41aa-8ffa-eff4b50991c5" containerID="41b03d3fe5dadce5b7aa14c548582163e38035b35c9b1b8b77b14e4fcda1c17e" exitCode=0 Nov 28 07:00:02 crc kubenswrapper[4889]: I1128 07:00:02.022486 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerDied","Data":"41b03d3fe5dadce5b7aa14c548582163e38035b35c9b1b8b77b14e4fcda1c17e"} Nov 28 07:00:02 crc kubenswrapper[4889]: I1128 07:00:02.022672 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"3f2520247695f5866c43645c5a5bbe9adfaaf96273d31c2d5157f6c9d4b49545"} Nov 28 07:00:03 crc kubenswrapper[4889]: I1128 07:00:03.033581 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"e62d92cde6ce294bcfc9071df2cff06ebecc367c4bf9d64945a97e6817f377a7"} Nov 28 07:00:03 crc kubenswrapper[4889]: I1128 07:00:03.035105 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"6d724ac536e1f038e2be226b1464fee614ab7ca2c57169f7b3f98dd86d223034"} Nov 28 07:00:03 crc kubenswrapper[4889]: I1128 07:00:03.035196 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"10bf4262890bb7dde3bcf190e5700c6310118310bb6ca3ea6abd52b074358e08"} Nov 28 07:00:03 crc kubenswrapper[4889]: I1128 07:00:03.035277 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"42cce3c432c63d659a72cd41d82876cd56fd26d0850116b8442b944fd71a6d94"} Nov 28 07:00:03 crc kubenswrapper[4889]: I1128 07:00:03.035354 4889 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"8bdf8e701704b24973a38091c6137b55bcc5c6d08b497ed6434caf68964bbef5"} Nov 28 07:00:03 crc kubenswrapper[4889]: I1128 07:00:03.035440 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"c68ed4da3738a2f6fa84e4e14f15c5d45454c3fcf12f0beb01877bae3bb3124a"} Nov 28 07:00:03 crc kubenswrapper[4889]: I1128 07:00:03.340644 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6de1d273-3dcf-4772-bc88-323f46e1ead5" path="/var/lib/kubelet/pods/6de1d273-3dcf-4772-bc88-323f46e1ead5/volumes" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.046216 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"4f2dc24dccf08c3240c86dfb446044adf233b1f7035c94abe4ed32f45bd89abe"} Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.534284 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kmjxb"] Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.535786 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.537973 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.538189 4889 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-t2hrm" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.541738 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.541797 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.630535 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7r6\" (UniqueName: \"kubernetes.io/projected/8de10e3d-446c-44e7-8d7a-3a933a566a21-kube-api-access-hw7r6\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.630776 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8de10e3d-446c-44e7-8d7a-3a933a566a21-crc-storage\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.630931 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8de10e3d-446c-44e7-8d7a-3a933a566a21-node-mnt\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.732148 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/8de10e3d-446c-44e7-8d7a-3a933a566a21-crc-storage\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.732229 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8de10e3d-446c-44e7-8d7a-3a933a566a21-node-mnt\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.732272 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7r6\" (UniqueName: \"kubernetes.io/projected/8de10e3d-446c-44e7-8d7a-3a933a566a21-kube-api-access-hw7r6\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.732648 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8de10e3d-446c-44e7-8d7a-3a933a566a21-node-mnt\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.732910 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8de10e3d-446c-44e7-8d7a-3a933a566a21-crc-storage\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.751450 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7r6\" (UniqueName: \"kubernetes.io/projected/8de10e3d-446c-44e7-8d7a-3a933a566a21-kube-api-access-hw7r6\") pod \"crc-storage-crc-kmjxb\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: I1128 07:00:05.865511 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: E1128 07:00:05.888316 4889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kmjxb_crc-storage_8de10e3d-446c-44e7-8d7a-3a933a566a21_0(2eab74e3aeda083f2204d9007f4309498ad73b47f015de6d859fcf9ec668050b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 07:00:05 crc kubenswrapper[4889]: E1128 07:00:05.888374 4889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kmjxb_crc-storage_8de10e3d-446c-44e7-8d7a-3a933a566a21_0(2eab74e3aeda083f2204d9007f4309498ad73b47f015de6d859fcf9ec668050b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: E1128 07:00:05.888396 4889 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kmjxb_crc-storage_8de10e3d-446c-44e7-8d7a-3a933a566a21_0(2eab74e3aeda083f2204d9007f4309498ad73b47f015de6d859fcf9ec668050b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:05 crc kubenswrapper[4889]: E1128 07:00:05.888445 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-kmjxb_crc-storage(8de10e3d-446c-44e7-8d7a-3a933a566a21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-kmjxb_crc-storage(8de10e3d-446c-44e7-8d7a-3a933a566a21)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kmjxb_crc-storage_8de10e3d-446c-44e7-8d7a-3a933a566a21_0(2eab74e3aeda083f2204d9007f4309498ad73b47f015de6d859fcf9ec668050b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-kmjxb" podUID="8de10e3d-446c-44e7-8d7a-3a933a566a21" Nov 28 07:00:07 crc kubenswrapper[4889]: I1128 07:00:07.869916 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kmjxb"] Nov 28 07:00:07 crc kubenswrapper[4889]: I1128 07:00:07.870505 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:07 crc kubenswrapper[4889]: I1128 07:00:07.870897 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:07 crc kubenswrapper[4889]: E1128 07:00:07.892838 4889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kmjxb_crc-storage_8de10e3d-446c-44e7-8d7a-3a933a566a21_0(c0f8450352fbaceda7704cb8b0506e7b7f04d2390bd7b5827a3a411f8990e716): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 07:00:07 crc kubenswrapper[4889]: E1128 07:00:07.892890 4889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kmjxb_crc-storage_8de10e3d-446c-44e7-8d7a-3a933a566a21_0(c0f8450352fbaceda7704cb8b0506e7b7f04d2390bd7b5827a3a411f8990e716): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:07 crc kubenswrapper[4889]: E1128 07:00:07.892908 4889 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kmjxb_crc-storage_8de10e3d-446c-44e7-8d7a-3a933a566a21_0(c0f8450352fbaceda7704cb8b0506e7b7f04d2390bd7b5827a3a411f8990e716): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:07 crc kubenswrapper[4889]: E1128 07:00:07.892951 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-kmjxb_crc-storage(8de10e3d-446c-44e7-8d7a-3a933a566a21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-kmjxb_crc-storage(8de10e3d-446c-44e7-8d7a-3a933a566a21)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kmjxb_crc-storage_8de10e3d-446c-44e7-8d7a-3a933a566a21_0(c0f8450352fbaceda7704cb8b0506e7b7f04d2390bd7b5827a3a411f8990e716): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-kmjxb" podUID="8de10e3d-446c-44e7-8d7a-3a933a566a21" Nov 28 07:00:08 crc kubenswrapper[4889]: I1128 07:00:08.064974 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" event={"ID":"f049022d-736b-41aa-8ffa-eff4b50991c5","Type":"ContainerStarted","Data":"7d0268ff5ccd056084cc24c4bc29b89a7806d7f4304d9f93b1b52eadc3034224"} Nov 28 07:00:08 crc kubenswrapper[4889]: I1128 07:00:08.065350 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" Nov 28 07:00:08 crc kubenswrapper[4889]: I1128 07:00:08.093717 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" Nov 28 07:00:08 crc kubenswrapper[4889]: I1128 07:00:08.134271 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" podStartSLOduration=8.134251231 podStartE2EDuration="8.134251231s" podCreationTimestamp="2025-11-28 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:00:08.104243647 +0000 UTC m=+731.074477822" watchObservedRunningTime="2025-11-28 07:00:08.134251231 +0000 UTC m=+731.104485396" Nov 28 07:00:09 crc kubenswrapper[4889]: I1128 07:00:09.083971 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" Nov 28 07:00:09 crc kubenswrapper[4889]: I1128 07:00:09.084323 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" Nov 28 07:00:09 crc kubenswrapper[4889]: I1128 07:00:09.115661 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" Nov 28 07:00:12 crc kubenswrapper[4889]: I1128 07:00:12.331180 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:12 crc kubenswrapper[4889]: I1128 07:00:12.331913 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:12 crc kubenswrapper[4889]: E1128 07:00:12.378675 4889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(ac0ca4812ca8a2a88712129e24edcf8d5f5bfbc76333b7f49ba4d8687c2383f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 28 07:00:12 crc kubenswrapper[4889]: E1128 07:00:12.378863 4889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(ac0ca4812ca8a2a88712129e24edcf8d5f5bfbc76333b7f49ba4d8687c2383f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:12 crc kubenswrapper[4889]: E1128 07:00:12.378918 4889 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(ac0ca4812ca8a2a88712129e24edcf8d5f5bfbc76333b7f49ba4d8687c2383f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:12 crc kubenswrapper[4889]: E1128 07:00:12.379122 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager(4e499ea2-0c14-4895-bff8-a67adcd6b0c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager(4e499ea2-0c14-4895-bff8-a67adcd6b0c6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29405220-rhd4d_openshift-operator-lifecycle-manager_4e499ea2-0c14-4895-bff8-a67adcd6b0c6_0(ac0ca4812ca8a2a88712129e24edcf8d5f5bfbc76333b7f49ba4d8687c2383f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" podUID="4e499ea2-0c14-4895-bff8-a67adcd6b0c6" Nov 28 07:00:15 crc kubenswrapper[4889]: I1128 07:00:15.332080 4889 scope.go:117] "RemoveContainer" containerID="52ae5f5374660ca9bd0699777aa53aaebd429485f4384242509e782ae0c613a9" Nov 28 07:00:16 crc kubenswrapper[4889]: I1128 07:00:16.126046 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/2.log" Nov 28 07:00:16 crc kubenswrapper[4889]: I1128 07:00:16.126691 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/1.log" Nov 28 07:00:16 crc kubenswrapper[4889]: I1128 07:00:16.126769 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vtjm7" event={"ID":"68ddfdcf-000e-45ae-a737-d3dd28115d5b","Type":"ContainerStarted","Data":"f35a8d9f8c25b6a35368b1ef2615cb6398e0e98eadd970cd047b60c3af425cf6"} Nov 28 07:00:20 crc kubenswrapper[4889]: I1128 07:00:20.330874 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:20 crc kubenswrapper[4889]: I1128 07:00:20.331918 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:20 crc kubenswrapper[4889]: I1128 07:00:20.584492 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kmjxb"] Nov 28 07:00:20 crc kubenswrapper[4889]: I1128 07:00:20.596940 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:00:21 crc kubenswrapper[4889]: I1128 07:00:21.170451 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kmjxb" event={"ID":"8de10e3d-446c-44e7-8d7a-3a933a566a21","Type":"ContainerStarted","Data":"95d3f42043533b8c7a4a8d66282ac8af3dee3ed919e99203843fc68c2897ad41"} Nov 28 07:00:24 crc kubenswrapper[4889]: I1128 07:00:24.197945 4889 generic.go:334] "Generic (PLEG): container finished" podID="8de10e3d-446c-44e7-8d7a-3a933a566a21" containerID="2e73b9a8dab3ca689fd8d5e441b6553c605e41a89eafbfa029416149059f44fd" exitCode=0 Nov 28 07:00:24 crc kubenswrapper[4889]: I1128 07:00:24.198416 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kmjxb" event={"ID":"8de10e3d-446c-44e7-8d7a-3a933a566a21","Type":"ContainerDied","Data":"2e73b9a8dab3ca689fd8d5e441b6553c605e41a89eafbfa029416149059f44fd"} Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.432973 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.492910 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8de10e3d-446c-44e7-8d7a-3a933a566a21-node-mnt\") pod \"8de10e3d-446c-44e7-8d7a-3a933a566a21\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.493034 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8de10e3d-446c-44e7-8d7a-3a933a566a21-crc-storage\") pod \"8de10e3d-446c-44e7-8d7a-3a933a566a21\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.493072 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw7r6\" (UniqueName: \"kubernetes.io/projected/8de10e3d-446c-44e7-8d7a-3a933a566a21-kube-api-access-hw7r6\") pod \"8de10e3d-446c-44e7-8d7a-3a933a566a21\" (UID: \"8de10e3d-446c-44e7-8d7a-3a933a566a21\") " Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.493079 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de10e3d-446c-44e7-8d7a-3a933a566a21-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "8de10e3d-446c-44e7-8d7a-3a933a566a21" (UID: "8de10e3d-446c-44e7-8d7a-3a933a566a21"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.493319 4889 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8de10e3d-446c-44e7-8d7a-3a933a566a21-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.497799 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de10e3d-446c-44e7-8d7a-3a933a566a21-kube-api-access-hw7r6" (OuterVolumeSpecName: "kube-api-access-hw7r6") pod "8de10e3d-446c-44e7-8d7a-3a933a566a21" (UID: "8de10e3d-446c-44e7-8d7a-3a933a566a21"). 
InnerVolumeSpecName "kube-api-access-hw7r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.505999 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de10e3d-446c-44e7-8d7a-3a933a566a21-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "8de10e3d-446c-44e7-8d7a-3a933a566a21" (UID: "8de10e3d-446c-44e7-8d7a-3a933a566a21"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.594360 4889 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8de10e3d-446c-44e7-8d7a-3a933a566a21-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:25 crc kubenswrapper[4889]: I1128 07:00:25.594815 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw7r6\" (UniqueName: \"kubernetes.io/projected/8de10e3d-446c-44e7-8d7a-3a933a566a21-kube-api-access-hw7r6\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:26 crc kubenswrapper[4889]: I1128 07:00:26.210256 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kmjxb" event={"ID":"8de10e3d-446c-44e7-8d7a-3a933a566a21","Type":"ContainerDied","Data":"95d3f42043533b8c7a4a8d66282ac8af3dee3ed919e99203843fc68c2897ad41"} Nov 28 07:00:26 crc kubenswrapper[4889]: I1128 07:00:26.210297 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d3f42043533b8c7a4a8d66282ac8af3dee3ed919e99203843fc68c2897ad41" Nov 28 07:00:26 crc kubenswrapper[4889]: I1128 07:00:26.210340 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kmjxb" Nov 28 07:00:26 crc kubenswrapper[4889]: I1128 07:00:26.330925 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:26 crc kubenswrapper[4889]: I1128 07:00:26.331440 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:26 crc kubenswrapper[4889]: I1128 07:00:26.534750 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d"] Nov 28 07:00:26 crc kubenswrapper[4889]: W1128 07:00:26.542834 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e499ea2_0c14_4895_bff8_a67adcd6b0c6.slice/crio-6b2eec2a5a61f5cf56bdb8e1657c9265c8d243e8462796d2bf5241627c15c5d5 WatchSource:0}: Error finding container 6b2eec2a5a61f5cf56bdb8e1657c9265c8d243e8462796d2bf5241627c15c5d5: Status 404 returned error can't find the container with id 6b2eec2a5a61f5cf56bdb8e1657c9265c8d243e8462796d2bf5241627c15c5d5 Nov 28 07:00:27 crc kubenswrapper[4889]: I1128 07:00:27.218674 4889 generic.go:334] "Generic (PLEG): container finished" podID="4e499ea2-0c14-4895-bff8-a67adcd6b0c6" containerID="e285756ac1997e200384971f98f569182b98dfd85ae6a80fd8ef1239c5b25992" exitCode=0 Nov 28 07:00:27 crc kubenswrapper[4889]: I1128 07:00:27.218797 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" event={"ID":"4e499ea2-0c14-4895-bff8-a67adcd6b0c6","Type":"ContainerDied","Data":"e285756ac1997e200384971f98f569182b98dfd85ae6a80fd8ef1239c5b25992"} Nov 28 07:00:27 crc kubenswrapper[4889]: I1128 07:00:27.219159 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" event={"ID":"4e499ea2-0c14-4895-bff8-a67adcd6b0c6","Type":"ContainerStarted","Data":"6b2eec2a5a61f5cf56bdb8e1657c9265c8d243e8462796d2bf5241627c15c5d5"} Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.424236 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.533281 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-config-volume\") pod \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.533382 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q24zf\" (UniqueName: \"kubernetes.io/projected/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-kube-api-access-q24zf\") pod \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.533433 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-secret-volume\") pod \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\" (UID: \"4e499ea2-0c14-4895-bff8-a67adcd6b0c6\") " Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.534638 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e499ea2-0c14-4895-bff8-a67adcd6b0c6" (UID: "4e499ea2-0c14-4895-bff8-a67adcd6b0c6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.538758 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4e499ea2-0c14-4895-bff8-a67adcd6b0c6" (UID: "4e499ea2-0c14-4895-bff8-a67adcd6b0c6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.539122 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-kube-api-access-q24zf" (OuterVolumeSpecName: "kube-api-access-q24zf") pod "4e499ea2-0c14-4895-bff8-a67adcd6b0c6" (UID: "4e499ea2-0c14-4895-bff8-a67adcd6b0c6"). InnerVolumeSpecName "kube-api-access-q24zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.635155 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q24zf\" (UniqueName: \"kubernetes.io/projected/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-kube-api-access-q24zf\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.635193 4889 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:28 crc kubenswrapper[4889]: I1128 07:00:28.635207 4889 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e499ea2-0c14-4895-bff8-a67adcd6b0c6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:29 crc kubenswrapper[4889]: I1128 07:00:29.230926 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" event={"ID":"4e499ea2-0c14-4895-bff8-a67adcd6b0c6","Type":"ContainerDied","Data":"6b2eec2a5a61f5cf56bdb8e1657c9265c8d243e8462796d2bf5241627c15c5d5"} Nov 28 07:00:29 crc kubenswrapper[4889]: I1128 07:00:29.231220 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b2eec2a5a61f5cf56bdb8e1657c9265c8d243e8462796d2bf5241627c15c5d5" Nov 28 07:00:29 crc kubenswrapper[4889]: I1128 07:00:29.230981 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-rhd4d" Nov 28 07:00:30 crc kubenswrapper[4889]: I1128 07:00:30.087107 4889 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 07:00:31 crc kubenswrapper[4889]: I1128 07:00:31.361448 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lxpz8" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.605937 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb"] Nov 28 07:00:32 crc kubenswrapper[4889]: E1128 07:00:32.607837 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e499ea2-0c14-4895-bff8-a67adcd6b0c6" containerName="collect-profiles" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.607940 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e499ea2-0c14-4895-bff8-a67adcd6b0c6" containerName="collect-profiles" Nov 28 07:00:32 crc kubenswrapper[4889]: E1128 07:00:32.608017 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de10e3d-446c-44e7-8d7a-3a933a566a21" containerName="storage" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.608035 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de10e3d-446c-44e7-8d7a-3a933a566a21" containerName="storage" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.608583 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de10e3d-446c-44e7-8d7a-3a933a566a21" containerName="storage" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.608617 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e499ea2-0c14-4895-bff8-a67adcd6b0c6" containerName="collect-profiles" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.611261 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.620385 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.637395 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb"] Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.784403 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.784550 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl8n8\" (UniqueName: \"kubernetes.io/projected/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-kube-api-access-hl8n8\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.784622 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.885420 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.885489 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl8n8\" (UniqueName: \"kubernetes.io/projected/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-kube-api-access-hl8n8\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.885529 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.886013 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.886074 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.912279 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl8n8\" (UniqueName: \"kubernetes.io/projected/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-kube-api-access-hl8n8\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:32 crc kubenswrapper[4889]: I1128 07:00:32.933075 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:33 crc kubenswrapper[4889]: I1128 07:00:33.145878 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb"] Nov 28 07:00:33 crc kubenswrapper[4889]: I1128 07:00:33.251688 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" event={"ID":"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48","Type":"ContainerStarted","Data":"701256fb14de837cdaedf210ddd29390b08c44b71b80b05a132b814f12a936c2"} Nov 28 07:00:34 crc kubenswrapper[4889]: I1128 07:00:34.260118 4889 generic.go:334] "Generic (PLEG): container finished" podID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerID="5725d765eb98f7d6b5644cec834c3722371d5bfb389ba3d71a301000d9054f45" exitCode=0 Nov 28 07:00:34 crc kubenswrapper[4889]: I1128 07:00:34.260161 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" event={"ID":"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48","Type":"ContainerDied","Data":"5725d765eb98f7d6b5644cec834c3722371d5bfb389ba3d71a301000d9054f45"} Nov 28 07:00:34 crc kubenswrapper[4889]: I1128 07:00:34.960051 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c4pgs"] Nov 28 07:00:34 crc kubenswrapper[4889]: I1128 07:00:34.961629 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:34 crc kubenswrapper[4889]: I1128 07:00:34.981624 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c4pgs"] Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.114009 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-utilities\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.114538 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btp5h\" (UniqueName: \"kubernetes.io/projected/51e54d9c-6043-409d-b127-f765b8ca9c49-kube-api-access-btp5h\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.114632 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-catalog-content\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.215834 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-utilities\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.216116 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btp5h\" (UniqueName: \"kubernetes.io/projected/51e54d9c-6043-409d-b127-f765b8ca9c49-kube-api-access-btp5h\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.216225 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-catalog-content\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.216436 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-utilities\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.216979 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-catalog-content\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.239635 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-btp5h\" (UniqueName: \"kubernetes.io/projected/51e54d9c-6043-409d-b127-f765b8ca9c49-kube-api-access-btp5h\") pod \"redhat-operators-c4pgs\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.319062 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:35 crc kubenswrapper[4889]: I1128 07:00:35.536925 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c4pgs"] Nov 28 07:00:36 crc kubenswrapper[4889]: I1128 07:00:36.271149 4889 generic.go:334] "Generic (PLEG): container finished" podID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerID="f436a946b8f596237643ef776e9537e392043c756866197ee4ec63f53fe7aa62" exitCode=0 Nov 28 07:00:36 crc kubenswrapper[4889]: I1128 07:00:36.271206 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" event={"ID":"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48","Type":"ContainerDied","Data":"f436a946b8f596237643ef776e9537e392043c756866197ee4ec63f53fe7aa62"} Nov 28 07:00:36 crc kubenswrapper[4889]: I1128 07:00:36.273146 4889 generic.go:334] "Generic (PLEG): container finished" podID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerID="b01ed69ebad4efeb4cc6345d1f5768a80aa5db2c8f3b4cec229c2f57bfb179a0" exitCode=0 Nov 28 07:00:36 crc kubenswrapper[4889]: I1128 07:00:36.273197 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4pgs" event={"ID":"51e54d9c-6043-409d-b127-f765b8ca9c49","Type":"ContainerDied","Data":"b01ed69ebad4efeb4cc6345d1f5768a80aa5db2c8f3b4cec229c2f57bfb179a0"} Nov 28 07:00:36 crc kubenswrapper[4889]: I1128 07:00:36.273268 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4pgs" event={"ID":"51e54d9c-6043-409d-b127-f765b8ca9c49","Type":"ContainerStarted","Data":"a4e1d6a9389885bc3b20b6b72264ae43ad50b684a799bd5566df6845051e01df"} Nov 28 07:00:37 crc kubenswrapper[4889]: I1128 07:00:37.283859 4889 generic.go:334] "Generic (PLEG): container finished" podID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerID="07f741938641fa4be60477357750f97184218cf56455193d0c246802b94f41a5" exitCode=0 Nov 28 07:00:37 crc kubenswrapper[4889]: I1128 07:00:37.283955 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" event={"ID":"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48","Type":"ContainerDied","Data":"07f741938641fa4be60477357750f97184218cf56455193d0c246802b94f41a5"} Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.292547 4889 generic.go:334] "Generic (PLEG): container finished" podID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerID="40520db339807d902dc7a03b679d43b6136b6bbd07b05aa92f8953549fc8f2b0" exitCode=0 Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.292649 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4pgs" event={"ID":"51e54d9c-6043-409d-b127-f765b8ca9c49","Type":"ContainerDied","Data":"40520db339807d902dc7a03b679d43b6136b6bbd07b05aa92f8953549fc8f2b0"} Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.543184 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.654785 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-util\") pod \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.654923 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-bundle\") pod \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.654975 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl8n8\" (UniqueName: \"kubernetes.io/projected/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-kube-api-access-hl8n8\") pod \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\" (UID: \"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48\") " Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.655515 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-bundle" (OuterVolumeSpecName: "bundle") pod "f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" (UID: "f34b29e6-fe3f-4bf4-9e80-3bd54e012e48"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.660054 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-kube-api-access-hl8n8" (OuterVolumeSpecName: "kube-api-access-hl8n8") pod "f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" (UID: "f34b29e6-fe3f-4bf4-9e80-3bd54e012e48"). InnerVolumeSpecName "kube-api-access-hl8n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.669441 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-util" (OuterVolumeSpecName: "util") pod "f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" (UID: "f34b29e6-fe3f-4bf4-9e80-3bd54e012e48"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.756680 4889 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-util\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.756785 4889 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:38 crc kubenswrapper[4889]: I1128 07:00:38.756847 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl8n8\" (UniqueName: \"kubernetes.io/projected/f34b29e6-fe3f-4bf4-9e80-3bd54e012e48-kube-api-access-hl8n8\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:39 crc kubenswrapper[4889]: I1128 07:00:39.301031 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" event={"ID":"f34b29e6-fe3f-4bf4-9e80-3bd54e012e48","Type":"ContainerDied","Data":"701256fb14de837cdaedf210ddd29390b08c44b71b80b05a132b814f12a936c2"} Nov 28 07:00:39 crc kubenswrapper[4889]: I1128 07:00:39.301359 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="701256fb14de837cdaedf210ddd29390b08c44b71b80b05a132b814f12a936c2" Nov 28 07:00:39 crc kubenswrapper[4889]: I1128 07:00:39.301051 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb" Nov 28 07:00:39 crc kubenswrapper[4889]: I1128 07:00:39.304322 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4pgs" event={"ID":"51e54d9c-6043-409d-b127-f765b8ca9c49","Type":"ContainerStarted","Data":"d5b766e85d69f9973da3af666702b17a17a302e70a61adab37811dd79ec37df4"} Nov 28 07:00:39 crc kubenswrapper[4889]: I1128 07:00:39.320464 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c4pgs" podStartSLOduration=2.804307197 podStartE2EDuration="5.320446244s" podCreationTimestamp="2025-11-28 07:00:34 +0000 UTC" firstStartedPulling="2025-11-28 07:00:36.274094832 +0000 UTC m=+759.244328987" lastFinishedPulling="2025-11-28 07:00:38.790233879 +0000 UTC m=+761.760468034" observedRunningTime="2025-11-28 07:00:39.318063133 +0000 UTC m=+762.288297288" watchObservedRunningTime="2025-11-28 07:00:39.320446244 +0000 UTC m=+762.290680409" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.078669 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw"] Nov 28 07:00:43 crc kubenswrapper[4889]: E1128 07:00:43.079391 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerName="util" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.079411 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerName="util" Nov 28 07:00:43 crc kubenswrapper[4889]: E1128 07:00:43.079432 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerName="extract" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.079443 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerName="extract" Nov 28 07:00:43 crc 
kubenswrapper[4889]: E1128 07:00:43.079473 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerName="pull" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.079485 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerName="pull" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.079626 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34b29e6-fe3f-4bf4-9e80-3bd54e012e48" containerName="extract" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.080280 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.083028 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.083288 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.083880 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2fzs8" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.105109 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw"] Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.205349 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffqq\" (UniqueName: \"kubernetes.io/projected/d804fabb-6387-4eef-a102-c35754398811-kube-api-access-9ffqq\") pod \"nmstate-operator-5b5b58f5c8-kbqpw\" (UID: \"d804fabb-6387-4eef-a102-c35754398811\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.306841 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ffqq\" (UniqueName: \"kubernetes.io/projected/d804fabb-6387-4eef-a102-c35754398811-kube-api-access-9ffqq\") pod \"nmstate-operator-5b5b58f5c8-kbqpw\" (UID: \"d804fabb-6387-4eef-a102-c35754398811\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.327728 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ffqq\" (UniqueName: \"kubernetes.io/projected/d804fabb-6387-4eef-a102-c35754398811-kube-api-access-9ffqq\") pod \"nmstate-operator-5b5b58f5c8-kbqpw\" (UID: \"d804fabb-6387-4eef-a102-c35754398811\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.400026 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw" Nov 28 07:00:43 crc kubenswrapper[4889]: I1128 07:00:43.626106 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw"] Nov 28 07:00:43 crc kubenswrapper[4889]: W1128 07:00:43.633946 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd804fabb_6387_4eef_a102_c35754398811.slice/crio-75b3a7b176b392b6ad39d4a8ab364ef077129260cb5e1533bd65c9ba3e974fdd WatchSource:0}: Error finding container 75b3a7b176b392b6ad39d4a8ab364ef077129260cb5e1533bd65c9ba3e974fdd: Status 404 returned error can't find the container with id 75b3a7b176b392b6ad39d4a8ab364ef077129260cb5e1533bd65c9ba3e974fdd Nov 28 07:00:44 crc kubenswrapper[4889]: I1128 07:00:44.329970 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw" event={"ID":"d804fabb-6387-4eef-a102-c35754398811","Type":"ContainerStarted","Data":"75b3a7b176b392b6ad39d4a8ab364ef077129260cb5e1533bd65c9ba3e974fdd"} Nov 28 07:00:45 crc kubenswrapper[4889]: I1128 07:00:45.319200 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:45 crc kubenswrapper[4889]: I1128 07:00:45.319253 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:45 crc kubenswrapper[4889]: I1128 07:00:45.358973 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:45 crc kubenswrapper[4889]: I1128 07:00:45.400217 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:47 crc kubenswrapper[4889]: I1128 07:00:47.347028 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw" event={"ID":"d804fabb-6387-4eef-a102-c35754398811","Type":"ContainerStarted","Data":"b7087b878b02ef163095be639d906f3e7a935fde42440fa162ff83329049f7cc"} Nov 28 07:00:47 crc kubenswrapper[4889]: I1128 07:00:47.363549 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-kbqpw" podStartSLOduration=1.59711088 podStartE2EDuration="4.363532852s" podCreationTimestamp="2025-11-28 07:00:43 +0000 UTC" firstStartedPulling="2025-11-28 07:00:43.635433326 +0000 UTC m=+766.605667481" lastFinishedPulling="2025-11-28 07:00:46.401855298 +0000 UTC m=+769.372089453" observedRunningTime="2025-11-28 07:00:47.36110503 +0000 UTC m=+770.331339175" watchObservedRunningTime="2025-11-28 07:00:47.363532852 +0000 UTC m=+770.333766997" Nov 28 07:00:47 crc kubenswrapper[4889]: I1128 07:00:47.955952 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c4pgs"] Nov 28 07:00:47 crc kubenswrapper[4889]: I1128 07:00:47.956253 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c4pgs" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerName="registry-server" containerID="cri-o://d5b766e85d69f9973da3af666702b17a17a302e70a61adab37811dd79ec37df4" gracePeriod=2 Nov 28 07:00:50 crc kubenswrapper[4889]: I1128 07:00:50.364497 4889 generic.go:334] "Generic (PLEG): container finished" podID="51e54d9c-6043-409d-b127-f765b8ca9c49" 
containerID="d5b766e85d69f9973da3af666702b17a17a302e70a61adab37811dd79ec37df4" exitCode=0 Nov 28 07:00:50 crc kubenswrapper[4889]: I1128 07:00:50.364539 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4pgs" event={"ID":"51e54d9c-6043-409d-b127-f765b8ca9c49","Type":"ContainerDied","Data":"d5b766e85d69f9973da3af666702b17a17a302e70a61adab37811dd79ec37df4"} Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.377201 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4pgs" event={"ID":"51e54d9c-6043-409d-b127-f765b8ca9c49","Type":"ContainerDied","Data":"a4e1d6a9389885bc3b20b6b72264ae43ad50b684a799bd5566df6845051e01df"} Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.378518 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4e1d6a9389885bc3b20b6b72264ae43ad50b684a799bd5566df6845051e01df" Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.424025 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.610381 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-catalog-content\") pod \"51e54d9c-6043-409d-b127-f765b8ca9c49\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.610447 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btp5h\" (UniqueName: \"kubernetes.io/projected/51e54d9c-6043-409d-b127-f765b8ca9c49-kube-api-access-btp5h\") pod \"51e54d9c-6043-409d-b127-f765b8ca9c49\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.610523 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-utilities\") pod \"51e54d9c-6043-409d-b127-f765b8ca9c49\" (UID: \"51e54d9c-6043-409d-b127-f765b8ca9c49\") " Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.612067 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-utilities" (OuterVolumeSpecName: "utilities") pod "51e54d9c-6043-409d-b127-f765b8ca9c49" (UID: "51e54d9c-6043-409d-b127-f765b8ca9c49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.617441 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e54d9c-6043-409d-b127-f765b8ca9c49-kube-api-access-btp5h" (OuterVolumeSpecName: "kube-api-access-btp5h") pod "51e54d9c-6043-409d-b127-f765b8ca9c49" (UID: "51e54d9c-6043-409d-b127-f765b8ca9c49"). InnerVolumeSpecName "kube-api-access-btp5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.712519 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btp5h\" (UniqueName: \"kubernetes.io/projected/51e54d9c-6043-409d-b127-f765b8ca9c49-kube-api-access-btp5h\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.712569 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.753355 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51e54d9c-6043-409d-b127-f765b8ca9c49" (UID: "51e54d9c-6043-409d-b127-f765b8ca9c49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:00:51 crc kubenswrapper[4889]: I1128 07:00:51.813271 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e54d9c-6043-409d-b127-f765b8ca9c49-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.389231 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c4pgs" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.437814 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c4pgs"] Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.442732 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c4pgs"] Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.828820 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc"] Nov 28 07:00:52 crc kubenswrapper[4889]: E1128 07:00:52.829040 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerName="registry-server" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.829055 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerName="registry-server" Nov 28 07:00:52 crc kubenswrapper[4889]: E1128 07:00:52.829066 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerName="extract-utilities" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.829072 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerName="extract-utilities" Nov 28 07:00:52 crc kubenswrapper[4889]: E1128 07:00:52.829081 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerName="extract-content" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.829087 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerName="extract-content" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.829173 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" containerName="registry-server" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.829691 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.832144 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xngtl" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.839116 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc"] Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.851614 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g"] Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.852289 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.857212 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.863209 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8pw8z"] Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.881126 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.926276 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g"] Nov 28 07:00:52 crc kubenswrapper[4889]: I1128 07:00:52.927283 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66w2q\" (UniqueName: \"kubernetes.io/projected/612daf5f-d1e1-4aa9-b972-9d8ab3ea3211-kube-api-access-66w2q\") pod \"nmstate-metrics-7f946cbc9-dlpnc\" (UID: \"612daf5f-d1e1-4aa9-b972-9d8ab3ea3211\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.017386 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8"] Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.018159 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.023974 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8"] Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.024317 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-mhsps" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.027966 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5557\" (UniqueName: \"kubernetes.io/projected/6b4620ef-3cb6-45a5-8787-58e934465bac-kube-api-access-z5557\") pod \"nmstate-webhook-5f6d4c5ccb-n2k6g\" (UID: \"6b4620ef-3cb6-45a5-8787-58e934465bac\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.028001 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-ovs-socket\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.028032 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66w2q\" (UniqueName: \"kubernetes.io/projected/612daf5f-d1e1-4aa9-b972-9d8ab3ea3211-kube-api-access-66w2q\") pod \"nmstate-metrics-7f946cbc9-dlpnc\" (UID: \"612daf5f-d1e1-4aa9-b972-9d8ab3ea3211\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.028080 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-dbus-socket\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.028107 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnm5f\" (UniqueName: \"kubernetes.io/projected/cf16260c-c349-4586-a9db-278bbf0cbb99-kube-api-access-wnm5f\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.028127 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6b4620ef-3cb6-45a5-8787-58e934465bac-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-n2k6g\" (UID: \"6b4620ef-3cb6-45a5-8787-58e934465bac\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.028147 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-nmstate-lock\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.029120 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 
07:00:53.029146 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.047732 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66w2q\" (UniqueName: \"kubernetes.io/projected/612daf5f-d1e1-4aa9-b972-9d8ab3ea3211-kube-api-access-66w2q\") pod \"nmstate-metrics-7f946cbc9-dlpnc\" (UID: \"612daf5f-d1e1-4aa9-b972-9d8ab3ea3211\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.128836 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqlm\" (UniqueName: \"kubernetes.io/projected/193c905f-411f-4fa6-bbfd-83039c4d3d8b-kube-api-access-crqlm\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129247 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5557\" (UniqueName: \"kubernetes.io/projected/6b4620ef-3cb6-45a5-8787-58e934465bac-kube-api-access-z5557\") pod \"nmstate-webhook-5f6d4c5ccb-n2k6g\" (UID: \"6b4620ef-3cb6-45a5-8787-58e934465bac\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129283 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-ovs-socket\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129307 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/193c905f-411f-4fa6-bbfd-83039c4d3d8b-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129342 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/193c905f-411f-4fa6-bbfd-83039c4d3d8b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129375 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-dbus-socket\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129387 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-ovs-socket\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129402 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wnm5f\" (UniqueName: \"kubernetes.io/projected/cf16260c-c349-4586-a9db-278bbf0cbb99-kube-api-access-wnm5f\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129440 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6b4620ef-3cb6-45a5-8787-58e934465bac-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-n2k6g\" (UID: \"6b4620ef-3cb6-45a5-8787-58e934465bac\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129465 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-nmstate-lock\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129538 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-nmstate-lock\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: E1128 07:00:53.129608 4889 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 28 07:00:53 crc kubenswrapper[4889]: E1128 07:00:53.129648 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b4620ef-3cb6-45a5-8787-58e934465bac-tls-key-pair podName:6b4620ef-3cb6-45a5-8787-58e934465bac nodeName:}" failed. No retries permitted until 2025-11-28 07:00:53.629633235 +0000 UTC m=+776.599867390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/6b4620ef-3cb6-45a5-8787-58e934465bac-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-n2k6g" (UID: "6b4620ef-3cb6-45a5-8787-58e934465bac") : secret "openshift-nmstate-webhook" not found Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.129674 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf16260c-c349-4586-a9db-278bbf0cbb99-dbus-socket\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.145097 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.161474 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnm5f\" (UniqueName: \"kubernetes.io/projected/cf16260c-c349-4586-a9db-278bbf0cbb99-kube-api-access-wnm5f\") pod \"nmstate-handler-8pw8z\" (UID: \"cf16260c-c349-4586-a9db-278bbf0cbb99\") " pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.165089 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5557\" (UniqueName: \"kubernetes.io/projected/6b4620ef-3cb6-45a5-8787-58e934465bac-kube-api-access-z5557\") pod \"nmstate-webhook-5f6d4c5ccb-n2k6g\" (UID: \"6b4620ef-3cb6-45a5-8787-58e934465bac\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.207956 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.230820 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/193c905f-411f-4fa6-bbfd-83039c4d3d8b-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.230874 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/193c905f-411f-4fa6-bbfd-83039c4d3d8b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.230951 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crqlm\" (UniqueName: \"kubernetes.io/projected/193c905f-411f-4fa6-bbfd-83039c4d3d8b-kube-api-access-crqlm\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.232011 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/193c905f-411f-4fa6-bbfd-83039c4d3d8b-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: E1128 07:00:53.232096 4889 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 28 07:00:53 crc kubenswrapper[4889]: E1128 07:00:53.232144 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/193c905f-411f-4fa6-bbfd-83039c4d3d8b-plugin-serving-cert podName:193c905f-411f-4fa6-bbfd-83039c4d3d8b nodeName:}" failed. No retries permitted until 2025-11-28 07:00:53.732130336 +0000 UTC m=+776.702364491 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/193c905f-411f-4fa6-bbfd-83039c4d3d8b-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-2jkx8" (UID: "193c905f-411f-4fa6-bbfd-83039c4d3d8b") : secret "plugin-serving-cert" not found Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.248661 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crqlm\" (UniqueName: \"kubernetes.io/projected/193c905f-411f-4fa6-bbfd-83039c4d3d8b-kube-api-access-crqlm\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.255594 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bb776c56c-hqn2j"] Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.256238 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.303125 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb776c56c-hqn2j"] Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.338921 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e54d9c-6043-409d-b127-f765b8ca9c49" path="/var/lib/kubelet/pods/51e54d9c-6043-409d-b127-f765b8ca9c49/volumes" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.392518 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc"] Nov 28 07:00:53 crc kubenswrapper[4889]: W1128 07:00:53.402233 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod612daf5f_d1e1_4aa9_b972_9d8ab3ea3211.slice/crio-e03c84d249cb9223e9eabbb522a2dc41652c6dda10b23a434e4235e2019e84b6 WatchSource:0}: Error finding container e03c84d249cb9223e9eabbb522a2dc41652c6dda10b23a434e4235e2019e84b6: Status 404 returned error can't find the container with id e03c84d249cb9223e9eabbb522a2dc41652c6dda10b23a434e4235e2019e84b6 Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.403601 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8pw8z" event={"ID":"cf16260c-c349-4586-a9db-278bbf0cbb99","Type":"ContainerStarted","Data":"0825adcc8a63da5c0af23840b7b4d6c67ebdc6fd51dd117a02381039b1fe6fac"} Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.436157 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/412d9388-319f-4bb4-9c09-b698047ebf8d-console-oauth-config\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.436602 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-service-ca\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.436673 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/412d9388-319f-4bb4-9c09-b698047ebf8d-console-serving-cert\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.436732 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-trusted-ca-bundle\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.436766 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-console-config\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.436798 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-oauth-serving-cert\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.436818 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fwg8\" (UniqueName: \"kubernetes.io/projected/412d9388-319f-4bb4-9c09-b698047ebf8d-kube-api-access-2fwg8\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.537787 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/412d9388-319f-4bb4-9c09-b698047ebf8d-console-serving-cert\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.537846 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-trusted-ca-bundle\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.537876 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-console-config\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.537910 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-oauth-serving-cert\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.537937 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2fwg8\" (UniqueName: \"kubernetes.io/projected/412d9388-319f-4bb4-9c09-b698047ebf8d-kube-api-access-2fwg8\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.537990 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/412d9388-319f-4bb4-9c09-b698047ebf8d-console-oauth-config\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.538012 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-service-ca\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.539124 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-service-ca\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.539125 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-console-config\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.539215 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-oauth-serving-cert\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.539582 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/412d9388-319f-4bb4-9c09-b698047ebf8d-trusted-ca-bundle\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.542007 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/412d9388-319f-4bb4-9c09-b698047ebf8d-console-serving-cert\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.543818 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/412d9388-319f-4bb4-9c09-b698047ebf8d-console-oauth-config\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.565473 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2fwg8\" (UniqueName: \"kubernetes.io/projected/412d9388-319f-4bb4-9c09-b698047ebf8d-kube-api-access-2fwg8\") pod \"console-7bb776c56c-hqn2j\" (UID: \"412d9388-319f-4bb4-9c09-b698047ebf8d\") " pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.578285 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.639464 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6b4620ef-3cb6-45a5-8787-58e934465bac-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-n2k6g\" (UID: \"6b4620ef-3cb6-45a5-8787-58e934465bac\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.644014 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6b4620ef-3cb6-45a5-8787-58e934465bac-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-n2k6g\" (UID: \"6b4620ef-3cb6-45a5-8787-58e934465bac\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.741063 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/193c905f-411f-4fa6-bbfd-83039c4d3d8b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.744667 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/193c905f-411f-4fa6-bbfd-83039c4d3d8b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2jkx8\" (UID: \"193c905f-411f-4fa6-bbfd-83039c4d3d8b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.771848 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb776c56c-hqn2j"] Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.784997 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:53 crc kubenswrapper[4889]: I1128 07:00:53.935188 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" Nov 28 07:00:54 crc kubenswrapper[4889]: I1128 07:00:54.125066 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8"] Nov 28 07:00:54 crc kubenswrapper[4889]: I1128 07:00:54.205516 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g"] Nov 28 07:00:54 crc kubenswrapper[4889]: W1128 07:00:54.212418 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b4620ef_3cb6_45a5_8787_58e934465bac.slice/crio-1c0965d269b4fcd032622ede553b4f5c5d1b4620f171dd639e75ce37bab99b5f WatchSource:0}: Error finding container 1c0965d269b4fcd032622ede553b4f5c5d1b4620f171dd639e75ce37bab99b5f: Status 404 returned error can't find the container with id 1c0965d269b4fcd032622ede553b4f5c5d1b4620f171dd639e75ce37bab99b5f Nov 28 07:00:54 crc kubenswrapper[4889]: I1128 07:00:54.410317 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" event={"ID":"193c905f-411f-4fa6-bbfd-83039c4d3d8b","Type":"ContainerStarted","Data":"f00597a329d0f042a521839841827b59b73afc4ac555e10e35645ff23fa58a2b"} Nov 28 07:00:54 crc kubenswrapper[4889]: I1128 07:00:54.412601 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-hqn2j" event={"ID":"412d9388-319f-4bb4-9c09-b698047ebf8d","Type":"ContainerStarted","Data":"bfd49bc7a929e12d3058f245a9726437c96443ba7e712f3e088b84f2f2a664f4"} Nov 28 07:00:54 crc kubenswrapper[4889]: I1128 07:00:54.412647 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-hqn2j" event={"ID":"412d9388-319f-4bb4-9c09-b698047ebf8d","Type":"ContainerStarted","Data":"3fe5af381813bb0cba4ea1e9d4dd3a8f9853785b401a0cba3b163b93f0a4880b"} Nov 28 07:00:54 crc kubenswrapper[4889]: I1128 07:00:54.415444 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" event={"ID":"612daf5f-d1e1-4aa9-b972-9d8ab3ea3211","Type":"ContainerStarted","Data":"e03c84d249cb9223e9eabbb522a2dc41652c6dda10b23a434e4235e2019e84b6"} Nov 28 07:00:54 crc kubenswrapper[4889]: I1128 07:00:54.416266 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" event={"ID":"6b4620ef-3cb6-45a5-8787-58e934465bac","Type":"ContainerStarted","Data":"1c0965d269b4fcd032622ede553b4f5c5d1b4620f171dd639e75ce37bab99b5f"} Nov 28 07:00:54 crc kubenswrapper[4889]: I1128 07:00:54.428340 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bb776c56c-hqn2j" podStartSLOduration=1.428315002 podStartE2EDuration="1.428315002s" podCreationTimestamp="2025-11-28 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:00:54.428150468 +0000 UTC m=+777.398384623" watchObservedRunningTime="2025-11-28 07:00:54.428315002 +0000 UTC m=+777.398549187" Nov 28 07:00:56 crc kubenswrapper[4889]: I1128 07:00:56.430453 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" event={"ID":"6b4620ef-3cb6-45a5-8787-58e934465bac","Type":"ContainerStarted","Data":"7ed00baac5fb4b054bced897e8e8ea3d063c77160d748fa04ea04d381dbf81a8"} Nov 28 07:00:56 crc kubenswrapper[4889]: 
I1128 07:00:56.431746 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:00:56 crc kubenswrapper[4889]: I1128 07:00:56.434487 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" event={"ID":"612daf5f-d1e1-4aa9-b972-9d8ab3ea3211","Type":"ContainerStarted","Data":"f7da75d7b5411381286266d143d0c8caeac7d634aa62b80f5cedbcc4f0022404"} Nov 28 07:00:56 crc kubenswrapper[4889]: I1128 07:00:56.436359 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8pw8z" event={"ID":"cf16260c-c349-4586-a9db-278bbf0cbb99","Type":"ContainerStarted","Data":"00e32c0b1f0f161fbbaafcaa7cda3474e723ad3c2b9a0cdbf3ad63b4bff0c2d5"} Nov 28 07:00:56 crc kubenswrapper[4889]: I1128 07:00:56.436507 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:00:56 crc kubenswrapper[4889]: I1128 07:00:56.448337 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" podStartSLOduration=2.8687787609999997 podStartE2EDuration="4.448322693s" podCreationTimestamp="2025-11-28 07:00:52 +0000 UTC" firstStartedPulling="2025-11-28 07:00:54.214337462 +0000 UTC m=+777.184571617" lastFinishedPulling="2025-11-28 07:00:55.793881404 +0000 UTC m=+778.764115549" observedRunningTime="2025-11-28 07:00:56.44546676 +0000 UTC m=+779.415700935" watchObservedRunningTime="2025-11-28 07:00:56.448322693 +0000 UTC m=+779.418556838" Nov 28 07:00:56 crc kubenswrapper[4889]: I1128 07:00:56.462627 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8pw8z" podStartSLOduration=1.910251688 podStartE2EDuration="4.462611697s" podCreationTimestamp="2025-11-28 07:00:52 +0000 UTC" firstStartedPulling="2025-11-28 07:00:53.237826211 +0000 UTC m=+776.208060356" lastFinishedPulling="2025-11-28 07:00:55.79018621 +0000 UTC m=+778.760420365" observedRunningTime="2025-11-28 07:00:56.46075926 +0000 UTC m=+779.430993425" watchObservedRunningTime="2025-11-28 07:00:56.462611697 +0000 UTC m=+779.432845852" Nov 28 07:00:57 crc kubenswrapper[4889]: I1128 07:00:57.443894 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" event={"ID":"193c905f-411f-4fa6-bbfd-83039c4d3d8b","Type":"ContainerStarted","Data":"813dc27703dbe20ae62655b22ae06963077244992ab6478cebeb01f4ace985fa"} Nov 28 07:00:57 crc kubenswrapper[4889]: I1128 07:00:57.468290 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2jkx8" podStartSLOduration=1.7823250179999999 podStartE2EDuration="4.4682718s" podCreationTimestamp="2025-11-28 07:00:53 +0000 UTC" firstStartedPulling="2025-11-28 07:00:54.134830367 +0000 UTC m=+777.105064522" lastFinishedPulling="2025-11-28 07:00:56.820777149 +0000 UTC m=+779.791011304" observedRunningTime="2025-11-28 07:00:57.459936768 +0000 UTC m=+780.430170923" watchObservedRunningTime="2025-11-28 07:00:57.4682718 +0000 UTC m=+780.438505955" Nov 28 07:00:57 crc kubenswrapper[4889]: I1128 07:00:57.587038 4889 scope.go:117] "RemoveContainer" containerID="ef0645ffeff9992c9a1c19e766d55c07ea21fa5bccaacb24159ca349745bc39b" Nov 28 07:00:58 crc kubenswrapper[4889]: I1128 07:00:58.452918 4889 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-vtjm7_68ddfdcf-000e-45ae-a737-d3dd28115d5b/kube-multus/2.log" Nov 28 07:00:58 crc kubenswrapper[4889]: I1128 07:00:58.782480 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:00:58 crc kubenswrapper[4889]: I1128 07:00:58.782556 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:00:59 crc kubenswrapper[4889]: I1128 07:00:59.470461 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" event={"ID":"612daf5f-d1e1-4aa9-b972-9d8ab3ea3211","Type":"ContainerStarted","Data":"d02243bf6fbaab302bb3d58290c55589eedec77c06ac7e3daa5bb255ccb16c69"} Nov 28 07:00:59 crc kubenswrapper[4889]: I1128 07:00:59.507817 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dlpnc" podStartSLOduration=2.551628384 podStartE2EDuration="7.507793068s" podCreationTimestamp="2025-11-28 07:00:52 +0000 UTC" firstStartedPulling="2025-11-28 07:00:53.404565708 +0000 UTC m=+776.374799863" lastFinishedPulling="2025-11-28 07:00:58.360730392 +0000 UTC m=+781.330964547" observedRunningTime="2025-11-28 07:00:59.497077395 +0000 UTC m=+782.467311580" watchObservedRunningTime="2025-11-28 07:00:59.507793068 +0000 UTC m=+782.478027233" Nov 28 07:01:03 crc kubenswrapper[4889]: I1128 07:01:03.228440 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8pw8z" Nov 28 07:01:03 crc kubenswrapper[4889]: I1128 07:01:03.579432 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:01:03 crc kubenswrapper[4889]: I1128 07:01:03.579488 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:01:03 crc kubenswrapper[4889]: I1128 07:01:03.597027 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:01:04 crc kubenswrapper[4889]: I1128 07:01:04.512469 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bb776c56c-hqn2j" Nov 28 07:01:04 crc kubenswrapper[4889]: I1128 07:01:04.563652 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9h4ng"] Nov 28 07:01:13 crc kubenswrapper[4889]: I1128 07:01:13.794771 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-n2k6g" Nov 28 07:01:24 crc kubenswrapper[4889]: I1128 07:01:24.840179 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5"] Nov 28 07:01:24 crc kubenswrapper[4889]: I1128 07:01:24.841602 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:24 crc kubenswrapper[4889]: I1128 07:01:24.843088 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 07:01:24 crc kubenswrapper[4889]: I1128 07:01:24.849002 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5"] Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.026809 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.026955 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.027019 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdx58\" (UniqueName: \"kubernetes.io/projected/5150180d-3afe-4c23-bfaa-8695d64fc2f9-kube-api-access-cdx58\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.128431 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.128491 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdx58\" (UniqueName: \"kubernetes.io/projected/5150180d-3afe-4c23-bfaa-8695d64fc2f9-kube-api-access-cdx58\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.128515 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.129025 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.129238 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.149680 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdx58\" (UniqueName: \"kubernetes.io/projected/5150180d-3afe-4c23-bfaa-8695d64fc2f9-kube-api-access-cdx58\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.158586 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:25 crc kubenswrapper[4889]: I1128 07:01:25.649674 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5"] Nov 28 07:01:25 crc kubenswrapper[4889]: W1128 07:01:25.656846 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5150180d_3afe_4c23_bfaa_8695d64fc2f9.slice/crio-a3b4c3db786e4a4b62bc787b8c13782c6051728dbbcff52ea5878c918cc5cb60 WatchSource:0}: Error finding container a3b4c3db786e4a4b62bc787b8c13782c6051728dbbcff52ea5878c918cc5cb60: Status 404 returned error can't find the container with id a3b4c3db786e4a4b62bc787b8c13782c6051728dbbcff52ea5878c918cc5cb60 Nov 28 07:01:26 crc kubenswrapper[4889]: I1128 07:01:26.642782 4889 generic.go:334] "Generic (PLEG): container finished" podID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerID="fd566ec6d76c575fcd474ffa868b90d0282eee320936af5c7f1e29ec98f6363f" exitCode=0 Nov 28 07:01:26 crc kubenswrapper[4889]: I1128 07:01:26.642842 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" event={"ID":"5150180d-3afe-4c23-bfaa-8695d64fc2f9","Type":"ContainerDied","Data":"fd566ec6d76c575fcd474ffa868b90d0282eee320936af5c7f1e29ec98f6363f"} Nov 28 07:01:26 crc kubenswrapper[4889]: I1128 07:01:26.642883 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" event={"ID":"5150180d-3afe-4c23-bfaa-8695d64fc2f9","Type":"ContainerStarted","Data":"a3b4c3db786e4a4b62bc787b8c13782c6051728dbbcff52ea5878c918cc5cb60"} Nov 28 07:01:28 crc kubenswrapper[4889]: I1128 07:01:28.659023 4889 generic.go:334] "Generic (PLEG): container finished" podID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerID="977cf913c736d5d254050a30bb956a897a39a8d8f0276a6955694b5225692ff7" exitCode=0 Nov 28 07:01:28 crc kubenswrapper[4889]: I1128 07:01:28.659137 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" event={"ID":"5150180d-3afe-4c23-bfaa-8695d64fc2f9","Type":"ContainerDied","Data":"977cf913c736d5d254050a30bb956a897a39a8d8f0276a6955694b5225692ff7"} Nov 28 07:01:28 crc kubenswrapper[4889]: I1128 07:01:28.791748 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:01:28 crc kubenswrapper[4889]: I1128 07:01:28.791800 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:01:29 crc kubenswrapper[4889]: I1128 07:01:29.613807 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9h4ng" podUID="aca1ea5e-ae14-45a8-9a19-acaea4176a13" containerName="console" containerID="cri-o://087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c" gracePeriod=15 Nov 28 07:01:29 crc kubenswrapper[4889]: I1128 07:01:29.668489 4889 generic.go:334] "Generic (PLEG): container finished" podID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerID="39334ab689cdeee3006028faf4ebd82ac7eb435d421778d937a3a6c8c7387876" exitCode=0 Nov 28 07:01:29 crc kubenswrapper[4889]: I1128 07:01:29.668584 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" event={"ID":"5150180d-3afe-4c23-bfaa-8695d64fc2f9","Type":"ContainerDied","Data":"39334ab689cdeee3006028faf4ebd82ac7eb435d421778d937a3a6c8c7387876"} Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.008749 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9h4ng_aca1ea5e-ae14-45a8-9a19-acaea4176a13/console/0.log" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.009189 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.195033 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-oauth-serving-cert\") pod \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.195100 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktb7c\" (UniqueName: \"kubernetes.io/projected/aca1ea5e-ae14-45a8-9a19-acaea4176a13-kube-api-access-ktb7c\") pod \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.195131 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-trusted-ca-bundle\") pod \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.195147 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-oauth-config\") pod \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.195187 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-serving-cert\") pod \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.195229 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-config\") pod \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.195246 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-service-ca\") pod \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\" (UID: \"aca1ea5e-ae14-45a8-9a19-acaea4176a13\") " Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.196015 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-service-ca" (OuterVolumeSpecName: "service-ca") pod "aca1ea5e-ae14-45a8-9a19-acaea4176a13" (UID: "aca1ea5e-ae14-45a8-9a19-acaea4176a13"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.196069 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "aca1ea5e-ae14-45a8-9a19-acaea4176a13" (UID: "aca1ea5e-ae14-45a8-9a19-acaea4176a13"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.196079 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "aca1ea5e-ae14-45a8-9a19-acaea4176a13" (UID: "aca1ea5e-ae14-45a8-9a19-acaea4176a13"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.196520 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-config" (OuterVolumeSpecName: "console-config") pod "aca1ea5e-ae14-45a8-9a19-acaea4176a13" (UID: "aca1ea5e-ae14-45a8-9a19-acaea4176a13"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.201744 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "aca1ea5e-ae14-45a8-9a19-acaea4176a13" (UID: "aca1ea5e-ae14-45a8-9a19-acaea4176a13"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.202248 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca1ea5e-ae14-45a8-9a19-acaea4176a13-kube-api-access-ktb7c" (OuterVolumeSpecName: "kube-api-access-ktb7c") pod "aca1ea5e-ae14-45a8-9a19-acaea4176a13" (UID: "aca1ea5e-ae14-45a8-9a19-acaea4176a13"). InnerVolumeSpecName "kube-api-access-ktb7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.204866 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "aca1ea5e-ae14-45a8-9a19-acaea4176a13" (UID: "aca1ea5e-ae14-45a8-9a19-acaea4176a13"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.296787 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.296818 4889 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.296829 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktb7c\" (UniqueName: \"kubernetes.io/projected/aca1ea5e-ae14-45a8-9a19-acaea4176a13-kube-api-access-ktb7c\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.296838 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.296847 4889 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.296855 4889 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.296863 4889 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca1ea5e-ae14-45a8-9a19-acaea4176a13-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.677782 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9h4ng_aca1ea5e-ae14-45a8-9a19-acaea4176a13/console/0.log" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.677829 4889 generic.go:334] "Generic (PLEG): container finished" podID="aca1ea5e-ae14-45a8-9a19-acaea4176a13" containerID="087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c" exitCode=2 Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.677959 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9h4ng" event={"ID":"aca1ea5e-ae14-45a8-9a19-acaea4176a13","Type":"ContainerDied","Data":"087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c"} Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.678037 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9h4ng" event={"ID":"aca1ea5e-ae14-45a8-9a19-acaea4176a13","Type":"ContainerDied","Data":"7837bb48b111dd23debb58e6bebc9e639948e9caef15144ca265fd172dd1ca68"} Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.678074 4889 scope.go:117] "RemoveContainer" containerID="087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.677978 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9h4ng" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.705841 4889 scope.go:117] "RemoveContainer" containerID="087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c" Nov 28 07:01:30 crc kubenswrapper[4889]: E1128 07:01:30.706598 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c\": container with ID starting with 087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c not found: ID does not exist" containerID="087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.706683 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c"} err="failed to get container status \"087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c\": rpc error: code = NotFound desc = could not find container \"087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c\": container with ID starting with 087104c7b6216c05d3f51e0d9c1e77d0e54e57a8d440def7a64c3cbba1de9e3c not found: ID does not exist" Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.725609 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9h4ng"] Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.732010 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9h4ng"] Nov 28 07:01:30 crc kubenswrapper[4889]: I1128 07:01:30.923600 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.010901 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-bundle\") pod \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.011123 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-util\") pod \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.011153 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdx58\" (UniqueName: \"kubernetes.io/projected/5150180d-3afe-4c23-bfaa-8695d64fc2f9-kube-api-access-cdx58\") pod \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\" (UID: \"5150180d-3afe-4c23-bfaa-8695d64fc2f9\") " Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.012590 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-bundle" (OuterVolumeSpecName: "bundle") pod "5150180d-3afe-4c23-bfaa-8695d64fc2f9" (UID: "5150180d-3afe-4c23-bfaa-8695d64fc2f9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.018502 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5150180d-3afe-4c23-bfaa-8695d64fc2f9-kube-api-access-cdx58" (OuterVolumeSpecName: "kube-api-access-cdx58") pod "5150180d-3afe-4c23-bfaa-8695d64fc2f9" (UID: "5150180d-3afe-4c23-bfaa-8695d64fc2f9"). InnerVolumeSpecName "kube-api-access-cdx58". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.112896 4889 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.112957 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdx58\" (UniqueName: \"kubernetes.io/projected/5150180d-3afe-4c23-bfaa-8695d64fc2f9-kube-api-access-cdx58\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.344907 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca1ea5e-ae14-45a8-9a19-acaea4176a13" path="/var/lib/kubelet/pods/aca1ea5e-ae14-45a8-9a19-acaea4176a13/volumes" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.423588 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-util" (OuterVolumeSpecName: "util") pod "5150180d-3afe-4c23-bfaa-8695d64fc2f9" (UID: "5150180d-3afe-4c23-bfaa-8695d64fc2f9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.519004 4889 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5150180d-3afe-4c23-bfaa-8695d64fc2f9-util\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.699590 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" event={"ID":"5150180d-3afe-4c23-bfaa-8695d64fc2f9","Type":"ContainerDied","Data":"a3b4c3db786e4a4b62bc787b8c13782c6051728dbbcff52ea5878c918cc5cb60"} Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.701046 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b4c3db786e4a4b62bc787b8c13782c6051728dbbcff52ea5878c918cc5cb60" Nov 28 07:01:31 crc kubenswrapper[4889]: I1128 07:01:31.699697 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.425107 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq"] Nov 28 07:01:40 crc kubenswrapper[4889]: E1128 07:01:40.425684 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerName="util" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.425700 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerName="util" Nov 28 07:01:40 crc kubenswrapper[4889]: E1128 07:01:40.425729 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca1ea5e-ae14-45a8-9a19-acaea4176a13" containerName="console" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.425738 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca1ea5e-ae14-45a8-9a19-acaea4176a13" containerName="console" Nov 28 07:01:40 crc kubenswrapper[4889]: E1128 07:01:40.425757 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerName="pull" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.425765 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerName="pull" Nov 28 07:01:40 crc kubenswrapper[4889]: E1128 07:01:40.425778 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerName="extract" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.425786 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerName="extract" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.425902 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5150180d-3afe-4c23-bfaa-8695d64fc2f9" containerName="extract" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.425917 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca1ea5e-ae14-45a8-9a19-acaea4176a13" containerName="console" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.426362 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.428556 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.429157 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.429768 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.430215 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.431668 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5cq8b" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.487224 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq"] Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.523143 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b79zr\" (UniqueName: \"kubernetes.io/projected/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-kube-api-access-b79zr\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.523195 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-webhook-cert\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.523292 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-apiservice-cert\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.624109 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-apiservice-cert\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.624542 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b79zr\" (UniqueName: \"kubernetes.io/projected/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-kube-api-access-b79zr\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.624574 
4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-webhook-cert\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.630859 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-webhook-cert\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.631173 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-apiservice-cert\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.651734 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b79zr\" (UniqueName: \"kubernetes.io/projected/f6ef069d-811d-4f18-a4e9-d7fa63b0096f-kube-api-access-b79zr\") pod \"metallb-operator-controller-manager-69d9449997-wlhbq\" (UID: \"f6ef069d-811d-4f18-a4e9-d7fa63b0096f\") " pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.689926 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl"] Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.690748 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.695064 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mqss7" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.696667 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.700961 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.709145 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl"] Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.725365 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-webhook-cert\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.725453 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-apiservice-cert\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.725531 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztzs\" (UniqueName: \"kubernetes.io/projected/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-kube-api-access-9ztzs\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.743386 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.826011 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztzs\" (UniqueName: \"kubernetes.io/projected/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-kube-api-access-9ztzs\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.826067 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-webhook-cert\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.826132 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-apiservice-cert\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.830013 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-apiservice-cert\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.830082 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-webhook-cert\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:40 crc kubenswrapper[4889]: I1128 07:01:40.842179 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztzs\" (UniqueName: \"kubernetes.io/projected/e8754ebc-1d87-4dfb-ac08-9c010fbe8109-kube-api-access-9ztzs\") pod \"metallb-operator-webhook-server-7f6b649f7b-vt4wl\" (UID: \"e8754ebc-1d87-4dfb-ac08-9c010fbe8109\") " pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:41 crc kubenswrapper[4889]: I1128 07:01:41.000526 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq"] Nov 28 07:01:41 crc kubenswrapper[4889]: W1128 07:01:41.006415 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ef069d_811d_4f18_a4e9_d7fa63b0096f.slice/crio-0678b3975dfe81bd9d1b5142ac09c51f7dda68a9efaa07e3d74d1a9d24b3f019 WatchSource:0}: Error finding container 0678b3975dfe81bd9d1b5142ac09c51f7dda68a9efaa07e3d74d1a9d24b3f019: Status 404 returned error can't find the container with id 0678b3975dfe81bd9d1b5142ac09c51f7dda68a9efaa07e3d74d1a9d24b3f019 Nov 28 07:01:41 crc kubenswrapper[4889]: I1128 07:01:41.026782 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:41 crc kubenswrapper[4889]: I1128 07:01:41.344164 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl"] Nov 28 07:01:41 crc kubenswrapper[4889]: W1128 07:01:41.363543 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8754ebc_1d87_4dfb_ac08_9c010fbe8109.slice/crio-b45ec1374f10f5949375d23eaf3e369865353c2f0adc48fc8bb41c60a81b940f WatchSource:0}: Error finding container b45ec1374f10f5949375d23eaf3e369865353c2f0adc48fc8bb41c60a81b940f: Status 404 returned error can't find the container with id b45ec1374f10f5949375d23eaf3e369865353c2f0adc48fc8bb41c60a81b940f Nov 28 07:01:41 crc kubenswrapper[4889]: I1128 07:01:41.760379 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" event={"ID":"e8754ebc-1d87-4dfb-ac08-9c010fbe8109","Type":"ContainerStarted","Data":"b45ec1374f10f5949375d23eaf3e369865353c2f0adc48fc8bb41c60a81b940f"} Nov 28 07:01:41 crc kubenswrapper[4889]: I1128 07:01:41.761449 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" event={"ID":"f6ef069d-811d-4f18-a4e9-d7fa63b0096f","Type":"ContainerStarted","Data":"0678b3975dfe81bd9d1b5142ac09c51f7dda68a9efaa07e3d74d1a9d24b3f019"} Nov 28 07:01:46 crc kubenswrapper[4889]: I1128 07:01:46.818749 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" event={"ID":"f6ef069d-811d-4f18-a4e9-d7fa63b0096f","Type":"ContainerStarted","Data":"b0d3cdec027e031753b06e517f67e74210be350ed5bde6d943947e62a7bdf354"} Nov 28 07:01:46 crc kubenswrapper[4889]: I1128 07:01:46.819252 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:01:46 crc kubenswrapper[4889]: I1128 07:01:46.820345 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" event={"ID":"e8754ebc-1d87-4dfb-ac08-9c010fbe8109","Type":"ContainerStarted","Data":"d0b744f1492cfc16b84562fc97d93ab34c86f05ed8422587e97ec21402136623"} Nov 28 07:01:46 crc kubenswrapper[4889]: I1128 07:01:46.820929 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:01:46 crc kubenswrapper[4889]: I1128 07:01:46.843126 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" podStartSLOduration=1.62721992 podStartE2EDuration="6.84310082s" podCreationTimestamp="2025-11-28 07:01:40 +0000 UTC" firstStartedPulling="2025-11-28 07:01:41.009886916 +0000 UTC m=+823.980121071" lastFinishedPulling="2025-11-28 07:01:46.225767816 +0000 UTC m=+829.196001971" observedRunningTime="2025-11-28 07:01:46.841485139 +0000 UTC m=+829.811719314" watchObservedRunningTime="2025-11-28 07:01:46.84310082 +0000 UTC m=+829.813335025" Nov 28 07:01:46 crc kubenswrapper[4889]: I1128 07:01:46.876130 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" podStartSLOduration=1.9980294440000002 podStartE2EDuration="6.87609746s" 
podCreationTimestamp="2025-11-28 07:01:40 +0000 UTC" firstStartedPulling="2025-11-28 07:01:41.366820207 +0000 UTC m=+824.337054362" lastFinishedPulling="2025-11-28 07:01:46.244888223 +0000 UTC m=+829.215122378" observedRunningTime="2025-11-28 07:01:46.867148022 +0000 UTC m=+829.837382227" watchObservedRunningTime="2025-11-28 07:01:46.87609746 +0000 UTC m=+829.846331655" Nov 28 07:01:58 crc kubenswrapper[4889]: I1128 07:01:58.782398 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:01:58 crc kubenswrapper[4889]: I1128 07:01:58.782917 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:01:58 crc kubenswrapper[4889]: I1128 07:01:58.782959 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 07:01:58 crc kubenswrapper[4889]: I1128 07:01:58.783461 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ebc63c9a59babecd1fd35c9530a11a72ee07b00bf300c1205eb3965dda30903"} pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:01:58 crc kubenswrapper[4889]: I1128 07:01:58.783516 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" containerID="cri-o://7ebc63c9a59babecd1fd35c9530a11a72ee07b00bf300c1205eb3965dda30903" gracePeriod=600 Nov 28 07:01:59 crc kubenswrapper[4889]: I1128 07:01:59.887368 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerID="7ebc63c9a59babecd1fd35c9530a11a72ee07b00bf300c1205eb3965dda30903" exitCode=0 Nov 28 07:01:59 crc kubenswrapper[4889]: I1128 07:01:59.887445 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerDied","Data":"7ebc63c9a59babecd1fd35c9530a11a72ee07b00bf300c1205eb3965dda30903"} Nov 28 07:01:59 crc kubenswrapper[4889]: I1128 07:01:59.887779 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"8bcf61faea8df3b4bedcdbe66375ffe429928fd4ff7747468313822736645149"} Nov 28 07:01:59 crc kubenswrapper[4889]: I1128 07:01:59.887802 4889 scope.go:117] "RemoveContainer" containerID="7cb3b598692f9ebef6839e9935cad4d68f3c8d646dc9a22d7d400e870e77c284" Nov 28 07:02:01 crc kubenswrapper[4889]: I1128 07:02:01.031192 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f6b649f7b-vt4wl" Nov 28 07:02:20 crc kubenswrapper[4889]: I1128 07:02:20.746585 4889 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69d9449997-wlhbq" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.461788 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m"] Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.462649 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.464858 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.464891 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5qftl" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.472863 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m"] Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.495948 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-267hv"] Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.497897 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.500284 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.501602 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.552586 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4sdtt"] Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.553457 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.555662 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.556002 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.556121 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.558882 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-5jh2q" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.569509 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-h5xvn"] Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.570784 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.576738 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.589918 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-h5xvn"] Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657509 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-startup\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657574 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzfvc\" (UniqueName: \"kubernetes.io/projected/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-kube-api-access-zzfvc\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657608 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlv9\" (UniqueName: \"kubernetes.io/projected/f466b540-ed9d-495d-8cf2-e6879ab71d05-kube-api-access-pnlv9\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657630 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-conf\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657652 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-metrics-certs\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657674 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f466b540-ed9d-495d-8cf2-e6879ab71d05-metallb-excludel2\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657701 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-metrics-certs\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657738 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-sockets\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657796 4889 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-reloader\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657824 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpj5j\" (UniqueName: \"kubernetes.io/projected/6376e2a1-c497-4e4f-a962-4b7af74a0cbb-kube-api-access-lpj5j\") pod \"frr-k8s-webhook-server-7fcb986d4-9nk2m\" (UID: \"6376e2a1-c497-4e4f-a962-4b7af74a0cbb\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657846 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-metrics\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657874 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6376e2a1-c497-4e4f-a962-4b7af74a0cbb-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9nk2m\" (UID: \"6376e2a1-c497-4e4f-a962-4b7af74a0cbb\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.657902 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759562 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-reloader\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759603 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpj5j\" (UniqueName: \"kubernetes.io/projected/6376e2a1-c497-4e4f-a962-4b7af74a0cbb-kube-api-access-lpj5j\") pod \"frr-k8s-webhook-server-7fcb986d4-9nk2m\" (UID: \"6376e2a1-c497-4e4f-a962-4b7af74a0cbb\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759622 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-metrics\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759645 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6376e2a1-c497-4e4f-a962-4b7af74a0cbb-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9nk2m\" (UID: \"6376e2a1-c497-4e4f-a962-4b7af74a0cbb\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759663 4889 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759687 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbntc\" (UniqueName: \"kubernetes.io/projected/63880bcb-6dcc-4936-a476-c3622733a4cf-kube-api-access-mbntc\") pod \"controller-f8648f98b-h5xvn\" (UID: \"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759725 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-startup\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759744 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63880bcb-6dcc-4936-a476-c3622733a4cf-metrics-certs\") pod \"controller-f8648f98b-h5xvn\" (UID: \"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759766 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzfvc\" (UniqueName: \"kubernetes.io/projected/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-kube-api-access-zzfvc\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759782 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlv9\" (UniqueName: \"kubernetes.io/projected/f466b540-ed9d-495d-8cf2-e6879ab71d05-kube-api-access-pnlv9\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759823 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-conf\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759840 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-metrics-certs\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759855 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f466b540-ed9d-495d-8cf2-e6879ab71d05-metallb-excludel2\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759877 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63880bcb-6dcc-4936-a476-c3622733a4cf-cert\") pod \"controller-f8648f98b-h5xvn\" (UID: 
\"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759893 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-metrics-certs\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.759906 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-sockets\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.760018 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-reloader\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: E1128 07:02:21.760133 4889 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 07:02:21 crc kubenswrapper[4889]: E1128 07:02:21.760180 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist podName:f466b540-ed9d-495d-8cf2-e6879ab71d05 nodeName:}" failed. No retries permitted until 2025-11-28 07:02:22.260163176 +0000 UTC m=+865.230397331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist") pod "speaker-4sdtt" (UID: "f466b540-ed9d-495d-8cf2-e6879ab71d05") : secret "metallb-memberlist" not found Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.760215 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-sockets\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.760767 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-conf\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.761146 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-frr-startup\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.761209 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-metrics\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.761549 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/f466b540-ed9d-495d-8cf2-e6879ab71d05-metallb-excludel2\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.766053 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-metrics-certs\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.766061 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-metrics-certs\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.767392 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6376e2a1-c497-4e4f-a962-4b7af74a0cbb-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9nk2m\" (UID: \"6376e2a1-c497-4e4f-a962-4b7af74a0cbb\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.776184 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpj5j\" (UniqueName: \"kubernetes.io/projected/6376e2a1-c497-4e4f-a962-4b7af74a0cbb-kube-api-access-lpj5j\") pod \"frr-k8s-webhook-server-7fcb986d4-9nk2m\" (UID: \"6376e2a1-c497-4e4f-a962-4b7af74a0cbb\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.778272 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzfvc\" (UniqueName: \"kubernetes.io/projected/6cbe65b7-1028-430c-a03b-48ecae8cd4e6-kube-api-access-zzfvc\") pod \"frr-k8s-267hv\" (UID: \"6cbe65b7-1028-430c-a03b-48ecae8cd4e6\") " pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.780414 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlv9\" (UniqueName: \"kubernetes.io/projected/f466b540-ed9d-495d-8cf2-e6879ab71d05-kube-api-access-pnlv9\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.785044 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.814112 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.860585 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbntc\" (UniqueName: \"kubernetes.io/projected/63880bcb-6dcc-4936-a476-c3622733a4cf-kube-api-access-mbntc\") pod \"controller-f8648f98b-h5xvn\" (UID: \"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.860853 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63880bcb-6dcc-4936-a476-c3622733a4cf-metrics-certs\") pod \"controller-f8648f98b-h5xvn\" (UID: \"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.860886 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63880bcb-6dcc-4936-a476-c3622733a4cf-cert\") pod \"controller-f8648f98b-h5xvn\" (UID: \"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.864292 4889 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.864654 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63880bcb-6dcc-4936-a476-c3622733a4cf-metrics-certs\") pod \"controller-f8648f98b-h5xvn\" (UID: \"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.880375 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63880bcb-6dcc-4936-a476-c3622733a4cf-cert\") pod \"controller-f8648f98b-h5xvn\" (UID: \"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.884235 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbntc\" (UniqueName: \"kubernetes.io/projected/63880bcb-6dcc-4936-a476-c3622733a4cf-kube-api-access-mbntc\") pod \"controller-f8648f98b-h5xvn\" (UID: \"63880bcb-6dcc-4936-a476-c3622733a4cf\") " pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:21 crc kubenswrapper[4889]: I1128 07:02:21.924944 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:22 crc kubenswrapper[4889]: I1128 07:02:22.038830 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerStarted","Data":"9d084893a6684981f4a1e7ef21a8cb08a0b8d17e58f235e8d1b0d2a8b9e05f48"} Nov 28 07:02:22 crc kubenswrapper[4889]: I1128 07:02:22.118783 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-h5xvn"] Nov 28 07:02:22 crc kubenswrapper[4889]: W1128 07:02:22.124345 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63880bcb_6dcc_4936_a476_c3622733a4cf.slice/crio-720a40e3d373183880574ecdc7e7eaee78dbdce8d7a7e21c93ba02b76d10b141 WatchSource:0}: Error finding container 720a40e3d373183880574ecdc7e7eaee78dbdce8d7a7e21c93ba02b76d10b141: Status 404 returned error can't find the container with id 720a40e3d373183880574ecdc7e7eaee78dbdce8d7a7e21c93ba02b76d10b141 Nov 28 07:02:22 crc kubenswrapper[4889]: I1128 07:02:22.186402 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m"] Nov 28 07:02:22 crc kubenswrapper[4889]: W1128 07:02:22.196826 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6376e2a1_c497_4e4f_a962_4b7af74a0cbb.slice/crio-b7693329f539c6985bce273a8b8f30dd410918de29e38fb8b15e69e59fdf705d WatchSource:0}: Error finding container b7693329f539c6985bce273a8b8f30dd410918de29e38fb8b15e69e59fdf705d: Status 404 returned error can't find the container with id b7693329f539c6985bce273a8b8f30dd410918de29e38fb8b15e69e59fdf705d Nov 28 07:02:22 crc kubenswrapper[4889]: I1128 07:02:22.266403 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:22 crc kubenswrapper[4889]: E1128 07:02:22.266574 4889 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 07:02:22 crc kubenswrapper[4889]: E1128 07:02:22.266627 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist podName:f466b540-ed9d-495d-8cf2-e6879ab71d05 nodeName:}" failed. No retries permitted until 2025-11-28 07:02:23.266613402 +0000 UTC m=+866.236847557 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist") pod "speaker-4sdtt" (UID: "f466b540-ed9d-495d-8cf2-e6879ab71d05") : secret "metallb-memberlist" not found Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.046877 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-h5xvn" event={"ID":"63880bcb-6dcc-4936-a476-c3622733a4cf","Type":"ContainerStarted","Data":"27550133f86fc36cb310a329e7448c4456c3224d102f9c8b323d15419cecff35"} Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.046923 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-h5xvn" event={"ID":"63880bcb-6dcc-4936-a476-c3622733a4cf","Type":"ContainerStarted","Data":"d6e77fe730765788f5b5f1e9712d0ec23b4d46101a5128cfccf4423a2eb5b130"} Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.046936 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-h5xvn" event={"ID":"63880bcb-6dcc-4936-a476-c3622733a4cf","Type":"ContainerStarted","Data":"720a40e3d373183880574ecdc7e7eaee78dbdce8d7a7e21c93ba02b76d10b141"} Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.047006 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.048091 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" event={"ID":"6376e2a1-c497-4e4f-a962-4b7af74a0cbb","Type":"ContainerStarted","Data":"b7693329f539c6985bce273a8b8f30dd410918de29e38fb8b15e69e59fdf705d"} Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.065623 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-h5xvn" podStartSLOduration=2.065606019 podStartE2EDuration="2.065606019s" podCreationTimestamp="2025-11-28 07:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:02:23.060835916 +0000 UTC m=+866.031070071" watchObservedRunningTime="2025-11-28 07:02:23.065606019 +0000 UTC m=+866.035840174" Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.278197 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.284107 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f466b540-ed9d-495d-8cf2-e6879ab71d05-memberlist\") pod \"speaker-4sdtt\" (UID: \"f466b540-ed9d-495d-8cf2-e6879ab71d05\") " pod="metallb-system/speaker-4sdtt" Nov 28 07:02:23 crc kubenswrapper[4889]: I1128 07:02:23.372536 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-4sdtt" Nov 28 07:02:23 crc kubenswrapper[4889]: W1128 07:02:23.411922 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf466b540_ed9d_495d_8cf2_e6879ab71d05.slice/crio-f608c309692bc92710367f59d66b3b86cc0ef5cd16df30cfd576542d0b4fdd64 WatchSource:0}: Error finding container f608c309692bc92710367f59d66b3b86cc0ef5cd16df30cfd576542d0b4fdd64: Status 404 returned error can't find the container with id f608c309692bc92710367f59d66b3b86cc0ef5cd16df30cfd576542d0b4fdd64 Nov 28 07:02:24 crc kubenswrapper[4889]: I1128 07:02:24.066117 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4sdtt" event={"ID":"f466b540-ed9d-495d-8cf2-e6879ab71d05","Type":"ContainerStarted","Data":"8e66ecf0919c535fd1cd829f29ae74b808ecf2e0c3d0cdccee761f475d0910e5"} Nov 28 07:02:24 crc kubenswrapper[4889]: I1128 07:02:24.066379 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4sdtt" event={"ID":"f466b540-ed9d-495d-8cf2-e6879ab71d05","Type":"ContainerStarted","Data":"07e44373539e3e6190aaec9800b8d9596d6ba35b12afc21328e8bd8da54a15ae"} Nov 28 07:02:24 crc kubenswrapper[4889]: I1128 07:02:24.066390 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4sdtt" event={"ID":"f466b540-ed9d-495d-8cf2-e6879ab71d05","Type":"ContainerStarted","Data":"f608c309692bc92710367f59d66b3b86cc0ef5cd16df30cfd576542d0b4fdd64"} Nov 28 07:02:24 crc kubenswrapper[4889]: I1128 07:02:24.066607 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4sdtt" Nov 28 07:02:24 crc kubenswrapper[4889]: I1128 07:02:24.083304 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4sdtt" podStartSLOduration=3.083285695 podStartE2EDuration="3.083285695s" podCreationTimestamp="2025-11-28 07:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:02:24.082113445 +0000 UTC m=+867.052347600" watchObservedRunningTime="2025-11-28 07:02:24.083285695 +0000 UTC m=+867.053519850" Nov 28 07:02:30 crc kubenswrapper[4889]: I1128 07:02:30.139293 4889 generic.go:334] "Generic (PLEG): container finished" podID="6cbe65b7-1028-430c-a03b-48ecae8cd4e6" containerID="8b99e2081250ca382eab4311eb956a5417aa273489b21e5da0fc1af74a11be08" exitCode=0 Nov 28 07:02:30 crc kubenswrapper[4889]: I1128 07:02:30.139485 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerDied","Data":"8b99e2081250ca382eab4311eb956a5417aa273489b21e5da0fc1af74a11be08"} Nov 28 07:02:30 crc kubenswrapper[4889]: I1128 07:02:30.142486 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" event={"ID":"6376e2a1-c497-4e4f-a962-4b7af74a0cbb","Type":"ContainerStarted","Data":"359da9ce2d492d363508766ce6c5cf343161cf27d8a8c497112493b9e5572fa4"} Nov 28 07:02:30 crc kubenswrapper[4889]: I1128 07:02:30.142732 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:30 crc kubenswrapper[4889]: I1128 07:02:30.184653 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" podStartSLOduration=1.5103392599999999 
podStartE2EDuration="9.184634053s" podCreationTimestamp="2025-11-28 07:02:21 +0000 UTC" firstStartedPulling="2025-11-28 07:02:22.19889395 +0000 UTC m=+865.169128105" lastFinishedPulling="2025-11-28 07:02:29.873188723 +0000 UTC m=+872.843422898" observedRunningTime="2025-11-28 07:02:30.181970214 +0000 UTC m=+873.152204379" watchObservedRunningTime="2025-11-28 07:02:30.184634053 +0000 UTC m=+873.154868208" Nov 28 07:02:31 crc kubenswrapper[4889]: I1128 07:02:31.150438 4889 generic.go:334] "Generic (PLEG): container finished" podID="6cbe65b7-1028-430c-a03b-48ecae8cd4e6" containerID="e55ded3ea90b6169fb382b61bb036cfcfa6218b44be268b585fee793348de9d3" exitCode=0 Nov 28 07:02:31 crc kubenswrapper[4889]: I1128 07:02:31.150560 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerDied","Data":"e55ded3ea90b6169fb382b61bb036cfcfa6218b44be268b585fee793348de9d3"} Nov 28 07:02:32 crc kubenswrapper[4889]: I1128 07:02:32.160489 4889 generic.go:334] "Generic (PLEG): container finished" podID="6cbe65b7-1028-430c-a03b-48ecae8cd4e6" containerID="7393e8e41575584d5c16f8521963ce6fe825799ed167ad93ad01f0dbe3f237c9" exitCode=0 Nov 28 07:02:32 crc kubenswrapper[4889]: I1128 07:02:32.160695 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerDied","Data":"7393e8e41575584d5c16f8521963ce6fe825799ed167ad93ad01f0dbe3f237c9"} Nov 28 07:02:33 crc kubenswrapper[4889]: I1128 07:02:33.171827 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerStarted","Data":"2df4bbd8127744334217075ffa51976249faff1573badd0caa24820195945318"} Nov 28 07:02:33 crc kubenswrapper[4889]: I1128 07:02:33.171864 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerStarted","Data":"f7c82abfb81c35a3d8e34f6c46906541fcb80378a80aa6df8920a3b3381c7209"} Nov 28 07:02:33 crc kubenswrapper[4889]: I1128 07:02:33.171875 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerStarted","Data":"d246aa39ff78d46111813eaf2e89e3644f00ad8d9b5e55a65d0fd96b0fe4990c"} Nov 28 07:02:33 crc kubenswrapper[4889]: I1128 07:02:33.171883 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerStarted","Data":"057c9c88d8a87284c969ac1698b3d65478b708123463720fd772a1302c56f395"} Nov 28 07:02:33 crc kubenswrapper[4889]: I1128 07:02:33.376561 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4sdtt" Nov 28 07:02:34 crc kubenswrapper[4889]: I1128 07:02:34.180777 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerStarted","Data":"016460f5f056a7947a5d6db7ab4365bd6a0de628e66786f9c3031b968be0ee62"} Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.049485 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7"] Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.050543 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.054132 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.074888 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7"] Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.169882 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgb2\" (UniqueName: \"kubernetes.io/projected/950186ee-ac42-4e8b-b946-437c6c9d3c0b-kube-api-access-2kgb2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.170164 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.170298 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.191271 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-267hv" event={"ID":"6cbe65b7-1028-430c-a03b-48ecae8cd4e6","Type":"ContainerStarted","Data":"8c3f6e5ade1f5f37c54fbc416609bdb8b8cb7847f778a744e9555e81b143dbbf"} Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.192152 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.212407 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-267hv" podStartSLOduration=6.280182938 podStartE2EDuration="14.212386846s" podCreationTimestamp="2025-11-28 07:02:21 +0000 UTC" firstStartedPulling="2025-11-28 07:02:21.968244471 +0000 UTC m=+864.938478626" lastFinishedPulling="2025-11-28 07:02:29.900448369 +0000 UTC m=+872.870682534" observedRunningTime="2025-11-28 07:02:35.209865041 +0000 UTC m=+878.180099206" watchObservedRunningTime="2025-11-28 07:02:35.212386846 +0000 UTC m=+878.182621001" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.271095 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.271205 
4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kgb2\" (UniqueName: \"kubernetes.io/projected/950186ee-ac42-4e8b-b946-437c6c9d3c0b-kube-api-access-2kgb2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.271234 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.271765 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.271753 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.290254 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgb2\" (UniqueName: \"kubernetes.io/projected/950186ee-ac42-4e8b-b946-437c6c9d3c0b-kube-api-access-2kgb2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.370007 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:35 crc kubenswrapper[4889]: I1128 07:02:35.600822 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7"] Nov 28 07:02:36 crc kubenswrapper[4889]: I1128 07:02:36.198476 4889 generic.go:334] "Generic (PLEG): container finished" podID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerID="6c6cb2857be780b3d3da34d3e9c8332b59f6febee3b54f1ccc80e9751af4109f" exitCode=0 Nov 28 07:02:36 crc kubenswrapper[4889]: I1128 07:02:36.198559 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" event={"ID":"950186ee-ac42-4e8b-b946-437c6c9d3c0b","Type":"ContainerDied","Data":"6c6cb2857be780b3d3da34d3e9c8332b59f6febee3b54f1ccc80e9751af4109f"} Nov 28 07:02:36 crc kubenswrapper[4889]: I1128 07:02:36.198780 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" event={"ID":"950186ee-ac42-4e8b-b946-437c6c9d3c0b","Type":"ContainerStarted","Data":"e046072e95b18b8531cf8c285273a423a67e49321a0b15b2a1594a98e7571704"} Nov 28 07:02:36 crc kubenswrapper[4889]: I1128 07:02:36.815323 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:36 crc kubenswrapper[4889]: I1128 07:02:36.848634 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:40 crc kubenswrapper[4889]: I1128 07:02:40.230772 4889 generic.go:334] "Generic (PLEG): container finished" podID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerID="9aa99732e008a1b4c82e0b146ee2128d9a999d61a4eab633c93d6a046a999dc5" exitCode=0 Nov 28 07:02:40 crc kubenswrapper[4889]: I1128 07:02:40.230831 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" event={"ID":"950186ee-ac42-4e8b-b946-437c6c9d3c0b","Type":"ContainerDied","Data":"9aa99732e008a1b4c82e0b146ee2128d9a999d61a4eab633c93d6a046a999dc5"} Nov 28 07:02:41 crc kubenswrapper[4889]: I1128 07:02:41.240830 4889 generic.go:334] "Generic (PLEG): container finished" podID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerID="d2e1849ecfa59461b978cb3ef665770cdfc75f72da1fb605f822d943e5d98e81" exitCode=0 Nov 28 07:02:41 crc kubenswrapper[4889]: I1128 07:02:41.240954 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" event={"ID":"950186ee-ac42-4e8b-b946-437c6c9d3c0b","Type":"ContainerDied","Data":"d2e1849ecfa59461b978cb3ef665770cdfc75f72da1fb605f822d943e5d98e81"} Nov 28 07:02:41 crc kubenswrapper[4889]: I1128 07:02:41.789843 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9nk2m" Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:41.928144 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-h5xvn" Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.477801 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.669118 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-util\") pod \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.669216 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-bundle\") pod \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.669285 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kgb2\" (UniqueName: \"kubernetes.io/projected/950186ee-ac42-4e8b-b946-437c6c9d3c0b-kube-api-access-2kgb2\") pod \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\" (UID: \"950186ee-ac42-4e8b-b946-437c6c9d3c0b\") " Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.670080 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-bundle" (OuterVolumeSpecName: "bundle") pod "950186ee-ac42-4e8b-b946-437c6c9d3c0b" (UID: "950186ee-ac42-4e8b-b946-437c6c9d3c0b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.674368 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950186ee-ac42-4e8b-b946-437c6c9d3c0b-kube-api-access-2kgb2" (OuterVolumeSpecName: "kube-api-access-2kgb2") pod "950186ee-ac42-4e8b-b946-437c6c9d3c0b" (UID: "950186ee-ac42-4e8b-b946-437c6c9d3c0b"). InnerVolumeSpecName "kube-api-access-2kgb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.679345 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-util" (OuterVolumeSpecName: "util") pod "950186ee-ac42-4e8b-b946-437c6c9d3c0b" (UID: "950186ee-ac42-4e8b-b946-437c6c9d3c0b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.771203 4889 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.771259 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kgb2\" (UniqueName: \"kubernetes.io/projected/950186ee-ac42-4e8b-b946-437c6c9d3c0b-kube-api-access-2kgb2\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:42 crc kubenswrapper[4889]: I1128 07:02:42.771271 4889 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950186ee-ac42-4e8b-b946-437c6c9d3c0b-util\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:43 crc kubenswrapper[4889]: I1128 07:02:43.256434 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" Nov 28 07:02:43 crc kubenswrapper[4889]: I1128 07:02:43.256417 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7" event={"ID":"950186ee-ac42-4e8b-b946-437c6c9d3c0b","Type":"ContainerDied","Data":"e046072e95b18b8531cf8c285273a423a67e49321a0b15b2a1594a98e7571704"} Nov 28 07:02:43 crc kubenswrapper[4889]: I1128 07:02:43.256570 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e046072e95b18b8531cf8c285273a423a67e49321a0b15b2a1594a98e7571704" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.636488 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49"] Nov 28 07:02:47 crc kubenswrapper[4889]: E1128 07:02:47.637229 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerName="pull" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.637251 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerName="pull" Nov 28 07:02:47 crc kubenswrapper[4889]: E1128 07:02:47.637271 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerName="util" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.637281 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerName="util" Nov 28 07:02:47 crc kubenswrapper[4889]: E1128 07:02:47.637311 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerName="extract" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.637324 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerName="extract" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.637475 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="950186ee-ac42-4e8b-b946-437c6c9d3c0b" containerName="extract" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.638101 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.639964 4889 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-lvf4g" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.639998 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.640684 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.690197 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49"] Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.731079 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrx9\" (UniqueName: \"kubernetes.io/projected/002e123a-2901-40af-9edc-12c0e205cd0c-kube-api-access-nxrx9\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mvq49\" (UID: \"002e123a-2901-40af-9edc-12c0e205cd0c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.731135 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/002e123a-2901-40af-9edc-12c0e205cd0c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mvq49\" (UID: \"002e123a-2901-40af-9edc-12c0e205cd0c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.831724 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrx9\" (UniqueName: \"kubernetes.io/projected/002e123a-2901-40af-9edc-12c0e205cd0c-kube-api-access-nxrx9\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mvq49\" (UID: \"002e123a-2901-40af-9edc-12c0e205cd0c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.831786 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/002e123a-2901-40af-9edc-12c0e205cd0c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mvq49\" (UID: \"002e123a-2901-40af-9edc-12c0e205cd0c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.832338 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/002e123a-2901-40af-9edc-12c0e205cd0c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mvq49\" (UID: \"002e123a-2901-40af-9edc-12c0e205cd0c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.854943 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrx9\" (UniqueName: \"kubernetes.io/projected/002e123a-2901-40af-9edc-12c0e205cd0c-kube-api-access-nxrx9\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mvq49\" (UID: \"002e123a-2901-40af-9edc-12c0e205cd0c\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" Nov 28 07:02:47 crc kubenswrapper[4889]: I1128 07:02:47.955413 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" Nov 28 07:02:48 crc kubenswrapper[4889]: I1128 07:02:48.391849 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49"] Nov 28 07:02:48 crc kubenswrapper[4889]: W1128 07:02:48.404502 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod002e123a_2901_40af_9edc_12c0e205cd0c.slice/crio-66a2be170e426618dbcbf1428e0422d2dea079758191a4cdd2f79fd549ee861f WatchSource:0}: Error finding container 66a2be170e426618dbcbf1428e0422d2dea079758191a4cdd2f79fd549ee861f: Status 404 returned error can't find the container with id 66a2be170e426618dbcbf1428e0422d2dea079758191a4cdd2f79fd549ee861f Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.291963 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" event={"ID":"002e123a-2901-40af-9edc-12c0e205cd0c","Type":"ContainerStarted","Data":"66a2be170e426618dbcbf1428e0422d2dea079758191a4cdd2f79fd549ee861f"} Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.751090 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qbbbx"] Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.759644 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.762289 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-utilities\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.762398 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-catalog-content\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.762420 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mks\" (UniqueName: \"kubernetes.io/projected/6834453e-9d05-4475-b9b5-332c2f2a07ad-kube-api-access-f9mks\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.768580 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbbbx"] Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.863344 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-catalog-content\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " 
pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.863388 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mks\" (UniqueName: \"kubernetes.io/projected/6834453e-9d05-4475-b9b5-332c2f2a07ad-kube-api-access-f9mks\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.863413 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-utilities\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.864017 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-utilities\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.864012 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-catalog-content\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:49 crc kubenswrapper[4889]: I1128 07:02:49.889665 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mks\" (UniqueName: \"kubernetes.io/projected/6834453e-9d05-4475-b9b5-332c2f2a07ad-kube-api-access-f9mks\") pod \"community-operators-qbbbx\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:50 crc kubenswrapper[4889]: I1128 07:02:50.088745 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:02:50 crc kubenswrapper[4889]: I1128 07:02:50.434490 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbbbx"] Nov 28 07:02:51 crc kubenswrapper[4889]: I1128 07:02:51.306459 4889 generic.go:334] "Generic (PLEG): container finished" podID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerID="03dfe7b55f450ab922e5dac33f787088b80b42b97d6e03cfc423a2822a6e6d8d" exitCode=0 Nov 28 07:02:51 crc kubenswrapper[4889]: I1128 07:02:51.306505 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbbbx" event={"ID":"6834453e-9d05-4475-b9b5-332c2f2a07ad","Type":"ContainerDied","Data":"03dfe7b55f450ab922e5dac33f787088b80b42b97d6e03cfc423a2822a6e6d8d"} Nov 28 07:02:51 crc kubenswrapper[4889]: I1128 07:02:51.306530 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbbbx" event={"ID":"6834453e-9d05-4475-b9b5-332c2f2a07ad","Type":"ContainerStarted","Data":"de65249a874d2a2a95f278ab5afef779dec7cf245c11053e2dfaa4c260a39f6a"} Nov 28 07:02:51 crc kubenswrapper[4889]: I1128 07:02:51.836495 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-267hv" Nov 28 07:02:56 crc kubenswrapper[4889]: I1128 07:02:56.337482 4889 generic.go:334] "Generic (PLEG): container finished" podID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerID="16afad3b986cc298338fa637a4ff1dc371069dd54a4dc51229a575a61fd56251" exitCode=0 Nov 28 07:02:56 crc kubenswrapper[4889]: I1128 07:02:56.337627 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbbbx" event={"ID":"6834453e-9d05-4475-b9b5-332c2f2a07ad","Type":"ContainerDied","Data":"16afad3b986cc298338fa637a4ff1dc371069dd54a4dc51229a575a61fd56251"} Nov 28 07:02:56 crc kubenswrapper[4889]: I1128 07:02:56.339435 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" event={"ID":"002e123a-2901-40af-9edc-12c0e205cd0c","Type":"ContainerStarted","Data":"0accf90ddc2adc000aa0d10c6f1bf845d1fbf3bf3d7e15969c4cbca69d542f56"} Nov 28 07:02:56 crc kubenswrapper[4889]: I1128 07:02:56.385641 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mvq49" podStartSLOduration=2.298938793 podStartE2EDuration="9.3856233s" podCreationTimestamp="2025-11-28 07:02:47 +0000 UTC" firstStartedPulling="2025-11-28 07:02:48.406690072 +0000 UTC m=+891.376924227" lastFinishedPulling="2025-11-28 07:02:55.493374579 +0000 UTC m=+898.463608734" observedRunningTime="2025-11-28 07:02:56.381435161 +0000 UTC m=+899.351669326" watchObservedRunningTime="2025-11-28 07:02:56.3856233 +0000 UTC m=+899.355857455" Nov 28 07:02:57 crc kubenswrapper[4889]: I1128 07:02:57.349340 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbbbx" event={"ID":"6834453e-9d05-4475-b9b5-332c2f2a07ad","Type":"ContainerStarted","Data":"5b50f8f4f27ba402cb57b3f939805cd89827a56bf1b42edf9e3ba6886c305108"} Nov 28 07:02:57 crc kubenswrapper[4889]: I1128 07:02:57.374338 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qbbbx" podStartSLOduration=2.778663296 podStartE2EDuration="8.374313306s" podCreationTimestamp="2025-11-28 07:02:49 +0000 UTC" 
firstStartedPulling="2025-11-28 07:02:51.308495339 +0000 UTC m=+894.278729494" lastFinishedPulling="2025-11-28 07:02:56.904145349 +0000 UTC m=+899.874379504" observedRunningTime="2025-11-28 07:02:57.373179857 +0000 UTC m=+900.343414012" watchObservedRunningTime="2025-11-28 07:02:57.374313306 +0000 UTC m=+900.344547461" Nov 28 07:03:00 crc kubenswrapper[4889]: I1128 07:03:00.089237 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:03:00 crc kubenswrapper[4889]: I1128 07:03:00.089580 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:03:00 crc kubenswrapper[4889]: I1128 07:03:00.141505 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.548079 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7bqh"] Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.549670 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.568201 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7bqh"] Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.619694 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35511fa-effe-470c-bb25-f144f1e21248-utilities\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.619755 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35511fa-effe-470c-bb25-f144f1e21248-catalog-content\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.619899 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4684\" (UniqueName: \"kubernetes.io/projected/e35511fa-effe-470c-bb25-f144f1e21248-kube-api-access-p4684\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.721674 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4684\" (UniqueName: \"kubernetes.io/projected/e35511fa-effe-470c-bb25-f144f1e21248-kube-api-access-p4684\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.721823 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35511fa-effe-470c-bb25-f144f1e21248-utilities\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.721847 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35511fa-effe-470c-bb25-f144f1e21248-catalog-content\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.722363 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35511fa-effe-470c-bb25-f144f1e21248-catalog-content\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.722473 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35511fa-effe-470c-bb25-f144f1e21248-utilities\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.742910 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4684\" (UniqueName: \"kubernetes.io/projected/e35511fa-effe-470c-bb25-f144f1e21248-kube-api-access-p4684\") pod \"redhat-marketplace-w7bqh\" (UID: \"e35511fa-effe-470c-bb25-f144f1e21248\") " pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:01 crc kubenswrapper[4889]: I1128 07:03:01.864433 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.099637 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7bqh"] Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.367408 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6pv9d"] Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.368156 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.371593 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.371622 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.371668 4889 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-k7q5m" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.375848 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7bqh" event={"ID":"e35511fa-effe-470c-bb25-f144f1e21248","Type":"ContainerStarted","Data":"f125b7ad1a4caf020b11e40bf2ce5abe5210681858d4951a7dc81e680a472380"} Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.376309 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6pv9d"] Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.433164 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrn6r\" (UniqueName: \"kubernetes.io/projected/9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5-kube-api-access-lrn6r\") pod \"cert-manager-webhook-f4fb5df64-6pv9d\" (UID: \"9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.433345 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6pv9d\" (UID: \"9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.521639 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-9cclj"] Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.522338 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.524761 4889 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-gf284" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.534549 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6pv9d\" (UID: \"9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.534658 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrn6r\" (UniqueName: \"kubernetes.io/projected/9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5-kube-api-access-lrn6r\") pod \"cert-manager-webhook-f4fb5df64-6pv9d\" (UID: \"9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.536227 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-9cclj"] Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.553982 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6pv9d\" (UID: \"9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.562813 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrn6r\" (UniqueName: \"kubernetes.io/projected/9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5-kube-api-access-lrn6r\") pod \"cert-manager-webhook-f4fb5df64-6pv9d\" (UID: \"9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.636300 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd5092c5-0e74-4f68-a2cd-033dc52f1e01-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-9cclj\" (UID: \"bd5092c5-0e74-4f68-a2cd-033dc52f1e01\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.636400 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mn4b\" (UniqueName: \"kubernetes.io/projected/bd5092c5-0e74-4f68-a2cd-033dc52f1e01-kube-api-access-4mn4b\") pod \"cert-manager-cainjector-855d9ccff4-9cclj\" (UID: \"bd5092c5-0e74-4f68-a2cd-033dc52f1e01\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.681343 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.737372 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mn4b\" (UniqueName: \"kubernetes.io/projected/bd5092c5-0e74-4f68-a2cd-033dc52f1e01-kube-api-access-4mn4b\") pod \"cert-manager-cainjector-855d9ccff4-9cclj\" (UID: \"bd5092c5-0e74-4f68-a2cd-033dc52f1e01\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.737441 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd5092c5-0e74-4f68-a2cd-033dc52f1e01-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-9cclj\" (UID: \"bd5092c5-0e74-4f68-a2cd-033dc52f1e01\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.761867 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mn4b\" (UniqueName: \"kubernetes.io/projected/bd5092c5-0e74-4f68-a2cd-033dc52f1e01-kube-api-access-4mn4b\") pod \"cert-manager-cainjector-855d9ccff4-9cclj\" (UID: \"bd5092c5-0e74-4f68-a2cd-033dc52f1e01\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.766801 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd5092c5-0e74-4f68-a2cd-033dc52f1e01-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-9cclj\" (UID: \"bd5092c5-0e74-4f68-a2cd-033dc52f1e01\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.837032 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" Nov 28 07:03:02 crc kubenswrapper[4889]: I1128 07:03:02.881191 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6pv9d"] Nov 28 07:03:02 crc kubenswrapper[4889]: W1128 07:03:02.886325 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ff59aa7_f908_4b9d_bbfd_7e8bedd07ee5.slice/crio-41cb3356b1133523b8a1dcf038db7aa090d797016d760f07e6d02e1dbeaf4a80 WatchSource:0}: Error finding container 41cb3356b1133523b8a1dcf038db7aa090d797016d760f07e6d02e1dbeaf4a80: Status 404 returned error can't find the container with id 41cb3356b1133523b8a1dcf038db7aa090d797016d760f07e6d02e1dbeaf4a80 Nov 28 07:03:03 crc kubenswrapper[4889]: I1128 07:03:03.078320 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-9cclj"] Nov 28 07:03:03 crc kubenswrapper[4889]: W1128 07:03:03.082422 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd5092c5_0e74_4f68_a2cd_033dc52f1e01.slice/crio-85e3d9ff8a3bce7edcb46bc9a7de5d5356b4813be9199735b0eb01632296e100 WatchSource:0}: Error finding container 85e3d9ff8a3bce7edcb46bc9a7de5d5356b4813be9199735b0eb01632296e100: Status 404 returned error can't find the container with id 85e3d9ff8a3bce7edcb46bc9a7de5d5356b4813be9199735b0eb01632296e100 Nov 28 07:03:03 crc kubenswrapper[4889]: I1128 07:03:03.381497 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" event={"ID":"bd5092c5-0e74-4f68-a2cd-033dc52f1e01","Type":"ContainerStarted","Data":"85e3d9ff8a3bce7edcb46bc9a7de5d5356b4813be9199735b0eb01632296e100"} Nov 28 07:03:03 crc kubenswrapper[4889]: I1128 07:03:03.386213 4889 generic.go:334] "Generic (PLEG): container finished" podID="e35511fa-effe-470c-bb25-f144f1e21248" containerID="1dd4fa50dd4b7dff100bbddafead8846cb587ea3faa2fd586e1c1f299edcde40" exitCode=0 Nov 28 07:03:03 crc kubenswrapper[4889]: I1128 07:03:03.386302 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7bqh" event={"ID":"e35511fa-effe-470c-bb25-f144f1e21248","Type":"ContainerDied","Data":"1dd4fa50dd4b7dff100bbddafead8846cb587ea3faa2fd586e1c1f299edcde40"} Nov 28 07:03:03 crc kubenswrapper[4889]: I1128 07:03:03.387557 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" event={"ID":"9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5","Type":"ContainerStarted","Data":"41cb3356b1133523b8a1dcf038db7aa090d797016d760f07e6d02e1dbeaf4a80"} Nov 28 07:03:04 crc kubenswrapper[4889]: I1128 07:03:04.949887 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-98vrt"] Nov 28 07:03:04 crc kubenswrapper[4889]: I1128 07:03:04.952835 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:04 crc kubenswrapper[4889]: I1128 07:03:04.960149 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98vrt"] Nov 28 07:03:04 crc kubenswrapper[4889]: I1128 07:03:04.966085 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-catalog-content\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:04 crc kubenswrapper[4889]: I1128 07:03:04.966482 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-utilities\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:04 crc kubenswrapper[4889]: I1128 07:03:04.967215 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tslc2\" (UniqueName: \"kubernetes.io/projected/9ae06f09-e814-45ae-96c8-56939e0dfff9-kube-api-access-tslc2\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:05 crc kubenswrapper[4889]: I1128 07:03:05.069193 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-utilities\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:05 crc kubenswrapper[4889]: I1128 07:03:05.069271 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tslc2\" (UniqueName: \"kubernetes.io/projected/9ae06f09-e814-45ae-96c8-56939e0dfff9-kube-api-access-tslc2\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:05 crc kubenswrapper[4889]: I1128 07:03:05.069302 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-catalog-content\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:05 crc kubenswrapper[4889]: I1128 07:03:05.069945 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-utilities\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:05 crc kubenswrapper[4889]: I1128 07:03:05.076680 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-catalog-content\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:05 crc kubenswrapper[4889]: I1128 07:03:05.091571 4889 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tslc2\" (UniqueName: \"kubernetes.io/projected/9ae06f09-e814-45ae-96c8-56939e0dfff9-kube-api-access-tslc2\") pod \"certified-operators-98vrt\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:05 crc kubenswrapper[4889]: I1128 07:03:05.285077 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:05 crc kubenswrapper[4889]: I1128 07:03:05.787800 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98vrt"] Nov 28 07:03:06 crc kubenswrapper[4889]: I1128 07:03:06.415055 4889 generic.go:334] "Generic (PLEG): container finished" podID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerID="c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6" exitCode=0 Nov 28 07:03:06 crc kubenswrapper[4889]: I1128 07:03:06.415101 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98vrt" event={"ID":"9ae06f09-e814-45ae-96c8-56939e0dfff9","Type":"ContainerDied","Data":"c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6"} Nov 28 07:03:06 crc kubenswrapper[4889]: I1128 07:03:06.415132 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98vrt" event={"ID":"9ae06f09-e814-45ae-96c8-56939e0dfff9","Type":"ContainerStarted","Data":"9e830edf6605dbb2915fed8d6924a6b7e66c8fadaab362d17715124b8c358f31"} Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.163195 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-fzbjf"] Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.166563 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-fzbjf" Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.168934 4889 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5nvtk" Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.171584 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-fzbjf"] Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.241598 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0220baa2-0242-482e-a078-e466f273d0f0-bound-sa-token\") pod \"cert-manager-86cb77c54b-fzbjf\" (UID: \"0220baa2-0242-482e-a078-e466f273d0f0\") " pod="cert-manager/cert-manager-86cb77c54b-fzbjf" Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.241702 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szmdf\" (UniqueName: \"kubernetes.io/projected/0220baa2-0242-482e-a078-e466f273d0f0-kube-api-access-szmdf\") pod \"cert-manager-86cb77c54b-fzbjf\" (UID: \"0220baa2-0242-482e-a078-e466f273d0f0\") " pod="cert-manager/cert-manager-86cb77c54b-fzbjf" Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.343928 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szmdf\" (UniqueName: \"kubernetes.io/projected/0220baa2-0242-482e-a078-e466f273d0f0-kube-api-access-szmdf\") pod \"cert-manager-86cb77c54b-fzbjf\" (UID: \"0220baa2-0242-482e-a078-e466f273d0f0\") " pod="cert-manager/cert-manager-86cb77c54b-fzbjf" Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.344057 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0220baa2-0242-482e-a078-e466f273d0f0-bound-sa-token\") pod \"cert-manager-86cb77c54b-fzbjf\" (UID: \"0220baa2-0242-482e-a078-e466f273d0f0\") " pod="cert-manager/cert-manager-86cb77c54b-fzbjf" Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.369363 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szmdf\" (UniqueName: \"kubernetes.io/projected/0220baa2-0242-482e-a078-e466f273d0f0-kube-api-access-szmdf\") pod \"cert-manager-86cb77c54b-fzbjf\" (UID: \"0220baa2-0242-482e-a078-e466f273d0f0\") " pod="cert-manager/cert-manager-86cb77c54b-fzbjf" Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.389503 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0220baa2-0242-482e-a078-e466f273d0f0-bound-sa-token\") pod \"cert-manager-86cb77c54b-fzbjf\" (UID: \"0220baa2-0242-482e-a078-e466f273d0f0\") " pod="cert-manager/cert-manager-86cb77c54b-fzbjf" Nov 28 07:03:09 crc kubenswrapper[4889]: I1128 07:03:09.497106 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-fzbjf" Nov 28 07:03:10 crc kubenswrapper[4889]: I1128 07:03:10.134369 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:03:13 crc kubenswrapper[4889]: I1128 07:03:13.545600 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbbbx"] Nov 28 07:03:13 crc kubenswrapper[4889]: I1128 07:03:13.546247 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qbbbx" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerName="registry-server" containerID="cri-o://5b50f8f4f27ba402cb57b3f939805cd89827a56bf1b42edf9e3ba6886c305108" gracePeriod=2 Nov 28 07:03:15 crc kubenswrapper[4889]: I1128 07:03:15.467368 4889 generic.go:334] "Generic (PLEG): container finished" podID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerID="5b50f8f4f27ba402cb57b3f939805cd89827a56bf1b42edf9e3ba6886c305108" exitCode=0 Nov 28 07:03:15 crc kubenswrapper[4889]: I1128 07:03:15.467564 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbbbx" event={"ID":"6834453e-9d05-4475-b9b5-332c2f2a07ad","Type":"ContainerDied","Data":"5b50f8f4f27ba402cb57b3f939805cd89827a56bf1b42edf9e3ba6886c305108"} Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.343855 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.454644 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-utilities\") pod \"6834453e-9d05-4475-b9b5-332c2f2a07ad\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.454732 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-catalog-content\") pod \"6834453e-9d05-4475-b9b5-332c2f2a07ad\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.454770 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9mks\" (UniqueName: \"kubernetes.io/projected/6834453e-9d05-4475-b9b5-332c2f2a07ad-kube-api-access-f9mks\") pod \"6834453e-9d05-4475-b9b5-332c2f2a07ad\" (UID: \"6834453e-9d05-4475-b9b5-332c2f2a07ad\") " Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.455904 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-utilities" (OuterVolumeSpecName: "utilities") pod "6834453e-9d05-4475-b9b5-332c2f2a07ad" (UID: "6834453e-9d05-4475-b9b5-332c2f2a07ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.463397 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6834453e-9d05-4475-b9b5-332c2f2a07ad-kube-api-access-f9mks" (OuterVolumeSpecName: "kube-api-access-f9mks") pod "6834453e-9d05-4475-b9b5-332c2f2a07ad" (UID: "6834453e-9d05-4475-b9b5-332c2f2a07ad"). InnerVolumeSpecName "kube-api-access-f9mks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.486870 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbbbx" event={"ID":"6834453e-9d05-4475-b9b5-332c2f2a07ad","Type":"ContainerDied","Data":"de65249a874d2a2a95f278ab5afef779dec7cf245c11053e2dfaa4c260a39f6a"} Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.486920 4889 scope.go:117] "RemoveContainer" containerID="5b50f8f4f27ba402cb57b3f939805cd89827a56bf1b42edf9e3ba6886c305108" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.486920 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbbbx" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.507464 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6834453e-9d05-4475-b9b5-332c2f2a07ad" (UID: "6834453e-9d05-4475-b9b5-332c2f2a07ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.517988 4889 scope.go:117] "RemoveContainer" containerID="16afad3b986cc298338fa637a4ff1dc371069dd54a4dc51229a575a61fd56251" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.535974 4889 scope.go:117] "RemoveContainer" containerID="03dfe7b55f450ab922e5dac33f787088b80b42b97d6e03cfc423a2822a6e6d8d" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.556136 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.556163 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6834453e-9d05-4475-b9b5-332c2f2a07ad-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.556176 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9mks\" (UniqueName: \"kubernetes.io/projected/6834453e-9d05-4475-b9b5-332c2f2a07ad-kube-api-access-f9mks\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.686656 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-fzbjf"] Nov 28 07:03:17 crc kubenswrapper[4889]: W1128 07:03:17.735338 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0220baa2_0242_482e_a078_e466f273d0f0.slice/crio-7540da35e1fc0df701f015dd7816e982c83230708517d15603d379c7ba0e4f02 WatchSource:0}: Error finding container 7540da35e1fc0df701f015dd7816e982c83230708517d15603d379c7ba0e4f02: Status 404 returned error can't find the container with id 7540da35e1fc0df701f015dd7816e982c83230708517d15603d379c7ba0e4f02 Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.901346 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbbbx"] Nov 28 07:03:17 crc kubenswrapper[4889]: I1128 07:03:17.905223 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qbbbx"] Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.499280 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" event={"ID":"9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5","Type":"ContainerStarted","Data":"eada3e29761bdebf836959c96424f90e9d1d5d4d6e32233d7d54f38f0725b4d6"} Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.499683 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.501107 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-fzbjf" event={"ID":"0220baa2-0242-482e-a078-e466f273d0f0","Type":"ContainerStarted","Data":"c25b9ac938d56d3ab4c765010116cb2ec2d77806a34ff71a6b77de0ec3ff37bd"} Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.501293 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-fzbjf" event={"ID":"0220baa2-0242-482e-a078-e466f273d0f0","Type":"ContainerStarted","Data":"7540da35e1fc0df701f015dd7816e982c83230708517d15603d379c7ba0e4f02"} Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.502734 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" event={"ID":"bd5092c5-0e74-4f68-a2cd-033dc52f1e01","Type":"ContainerStarted","Data":"ce561ee70a90052889a6af2ca93bfa2fb112c2c46ef4356245b9d31c106dce64"} Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.504666 4889 generic.go:334] "Generic (PLEG): container finished" podID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerID="79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397" exitCode=0 Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.504738 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98vrt" event={"ID":"9ae06f09-e814-45ae-96c8-56939e0dfff9","Type":"ContainerDied","Data":"79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397"} Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.506631 4889 generic.go:334] "Generic (PLEG): container finished" podID="e35511fa-effe-470c-bb25-f144f1e21248" containerID="137a496ff03c359e9d19097ca1944f409a424c862bed3624132b59cf4417c54b" exitCode=0 Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.506765 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7bqh" event={"ID":"e35511fa-effe-470c-bb25-f144f1e21248","Type":"ContainerDied","Data":"137a496ff03c359e9d19097ca1944f409a424c862bed3624132b59cf4417c54b"} Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.523304 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" podStartSLOduration=2.000498246 podStartE2EDuration="16.523285732s" podCreationTimestamp="2025-11-28 07:03:02 +0000 UTC" firstStartedPulling="2025-11-28 07:03:02.889315639 +0000 UTC m=+905.859549794" lastFinishedPulling="2025-11-28 07:03:17.412103125 +0000 UTC m=+920.382337280" observedRunningTime="2025-11-28 07:03:18.517996615 +0000 UTC m=+921.488230770" watchObservedRunningTime="2025-11-28 07:03:18.523285732 +0000 UTC m=+921.493519887" Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.568051 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-fzbjf" podStartSLOduration=9.5680326 podStartE2EDuration="9.5680326s" podCreationTimestamp="2025-11-28 07:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-28 07:03:18.562495256 +0000 UTC m=+921.532729411" watchObservedRunningTime="2025-11-28 07:03:18.5680326 +0000 UTC m=+921.538266755" Nov 28 07:03:18 crc kubenswrapper[4889]: I1128 07:03:18.611966 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9cclj" podStartSLOduration=2.284166817 podStartE2EDuration="16.611943576s" podCreationTimestamp="2025-11-28 07:03:02 +0000 UTC" firstStartedPulling="2025-11-28 07:03:03.084336216 +0000 UTC m=+906.054570371" lastFinishedPulling="2025-11-28 07:03:17.412112965 +0000 UTC m=+920.382347130" observedRunningTime="2025-11-28 07:03:18.609191235 +0000 UTC m=+921.579425390" watchObservedRunningTime="2025-11-28 07:03:18.611943576 +0000 UTC m=+921.582177731" Nov 28 07:03:19 crc kubenswrapper[4889]: I1128 07:03:19.339798 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" path="/var/lib/kubelet/pods/6834453e-9d05-4475-b9b5-332c2f2a07ad/volumes" Nov 28 07:03:19 crc kubenswrapper[4889]: I1128 07:03:19.517828 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98vrt" event={"ID":"9ae06f09-e814-45ae-96c8-56939e0dfff9","Type":"ContainerStarted","Data":"5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1"} Nov 28 07:03:19 crc kubenswrapper[4889]: I1128 07:03:19.519811 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7bqh" event={"ID":"e35511fa-effe-470c-bb25-f144f1e21248","Type":"ContainerStarted","Data":"d0d11e85305b3ddf345c68d531bfacb11ad29f35c2eac274236bb969a67b8883"} Nov 28 07:03:19 crc kubenswrapper[4889]: I1128 07:03:19.539567 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-98vrt" podStartSLOduration=4.422483353 podStartE2EDuration="15.539551382s" podCreationTimestamp="2025-11-28 07:03:04 +0000 UTC" firstStartedPulling="2025-11-28 07:03:08.086391635 +0000 UTC m=+911.056625790" lastFinishedPulling="2025-11-28 07:03:19.203459664 +0000 UTC m=+922.173693819" observedRunningTime="2025-11-28 07:03:19.53562348 +0000 UTC m=+922.505857635" watchObservedRunningTime="2025-11-28 07:03:19.539551382 +0000 UTC m=+922.509785537" Nov 28 07:03:19 crc kubenswrapper[4889]: I1128 07:03:19.553175 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7bqh" podStartSLOduration=2.807506981 podStartE2EDuration="18.553158624s" podCreationTimestamp="2025-11-28 07:03:01 +0000 UTC" firstStartedPulling="2025-11-28 07:03:03.388544259 +0000 UTC m=+906.358778414" lastFinishedPulling="2025-11-28 07:03:19.134195902 +0000 UTC m=+922.104430057" observedRunningTime="2025-11-28 07:03:19.550625909 +0000 UTC m=+922.520860064" watchObservedRunningTime="2025-11-28 07:03:19.553158624 +0000 UTC m=+922.523392779" Nov 28 07:03:21 crc kubenswrapper[4889]: I1128 07:03:21.865578 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:21 crc kubenswrapper[4889]: I1128 07:03:21.866178 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:21 crc kubenswrapper[4889]: I1128 07:03:21.905232 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:22 crc 
kubenswrapper[4889]: I1128 07:03:22.684638 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-6pv9d" Nov 28 07:03:25 crc kubenswrapper[4889]: I1128 07:03:25.286247 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:25 crc kubenswrapper[4889]: I1128 07:03:25.286893 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:25 crc kubenswrapper[4889]: I1128 07:03:25.341812 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:25 crc kubenswrapper[4889]: I1128 07:03:25.586249 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:29 crc kubenswrapper[4889]: I1128 07:03:29.343968 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-98vrt"] Nov 28 07:03:29 crc kubenswrapper[4889]: I1128 07:03:29.344223 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-98vrt" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerName="registry-server" containerID="cri-o://5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1" gracePeriod=2 Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.274057 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.421031 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tslc2\" (UniqueName: \"kubernetes.io/projected/9ae06f09-e814-45ae-96c8-56939e0dfff9-kube-api-access-tslc2\") pod \"9ae06f09-e814-45ae-96c8-56939e0dfff9\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.421108 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-catalog-content\") pod \"9ae06f09-e814-45ae-96c8-56939e0dfff9\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.421161 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-utilities\") pod \"9ae06f09-e814-45ae-96c8-56939e0dfff9\" (UID: \"9ae06f09-e814-45ae-96c8-56939e0dfff9\") " Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.422132 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-utilities" (OuterVolumeSpecName: "utilities") pod "9ae06f09-e814-45ae-96c8-56939e0dfff9" (UID: "9ae06f09-e814-45ae-96c8-56939e0dfff9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.426346 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae06f09-e814-45ae-96c8-56939e0dfff9-kube-api-access-tslc2" (OuterVolumeSpecName: "kube-api-access-tslc2") pod "9ae06f09-e814-45ae-96c8-56939e0dfff9" (UID: "9ae06f09-e814-45ae-96c8-56939e0dfff9"). 
InnerVolumeSpecName "kube-api-access-tslc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.468009 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ae06f09-e814-45ae-96c8-56939e0dfff9" (UID: "9ae06f09-e814-45ae-96c8-56939e0dfff9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.523133 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tslc2\" (UniqueName: \"kubernetes.io/projected/9ae06f09-e814-45ae-96c8-56939e0dfff9-kube-api-access-tslc2\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.523170 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.523180 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae06f09-e814-45ae-96c8-56939e0dfff9-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.590948 4889 generic.go:334] "Generic (PLEG): container finished" podID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerID="5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1" exitCode=0 Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.591028 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98vrt" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.591034 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98vrt" event={"ID":"9ae06f09-e814-45ae-96c8-56939e0dfff9","Type":"ContainerDied","Data":"5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1"} Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.591177 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98vrt" event={"ID":"9ae06f09-e814-45ae-96c8-56939e0dfff9","Type":"ContainerDied","Data":"9e830edf6605dbb2915fed8d6924a6b7e66c8fadaab362d17715124b8c358f31"} Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.591200 4889 scope.go:117] "RemoveContainer" containerID="5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1" Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.620759 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-98vrt"] Nov 28 07:03:30 crc kubenswrapper[4889]: I1128 07:03:30.625587 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-98vrt"] Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.027128 4889 scope.go:117] "RemoveContainer" containerID="79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397" Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.043086 4889 scope.go:117] "RemoveContainer" containerID="c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6" Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.065833 4889 scope.go:117] "RemoveContainer" containerID="5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1" Nov 28 07:03:31 crc 
kubenswrapper[4889]: E1128 07:03:31.066285 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1\": container with ID starting with 5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1 not found: ID does not exist" containerID="5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1" Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.066320 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1"} err="failed to get container status \"5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1\": rpc error: code = NotFound desc = could not find container \"5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1\": container with ID starting with 5577bc32f3d34b68968f31ddfe9d95145e77e280ff9274f4fd212d60572b06d1 not found: ID does not exist" Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.066341 4889 scope.go:117] "RemoveContainer" containerID="79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397" Nov 28 07:03:31 crc kubenswrapper[4889]: E1128 07:03:31.066664 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397\": container with ID starting with 79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397 not found: ID does not exist" containerID="79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397" Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.066699 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397"} err="failed to get container status \"79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397\": rpc error: code = NotFound desc = could not find container \"79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397\": container with ID starting with 79655c48d81e623f4e0d280d12063e12f70e314f346c955708d7b9b0d83ab397 not found: ID does not exist" Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.066726 4889 scope.go:117] "RemoveContainer" containerID="c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6" Nov 28 07:03:31 crc kubenswrapper[4889]: E1128 07:03:31.066983 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6\": container with ID starting with c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6 not found: ID does not exist" containerID="c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6" Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.067014 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6"} err="failed to get container status \"c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6\": rpc error: code = NotFound desc = could not find container \"c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6\": container with ID starting with c53c9e897b294c2e1094a6baf2ac9733f1de6894c0e071ea8ccad7ecce40e6d6 not found: ID does not exist" Nov 28 07:03:31 crc kubenswrapper[4889]: 
I1128 07:03:31.346271 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" path="/var/lib/kubelet/pods/9ae06f09-e814-45ae-96c8-56939e0dfff9/volumes" Nov 28 07:03:31 crc kubenswrapper[4889]: I1128 07:03:31.910323 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7bqh" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.361027 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jvv8f"] Nov 28 07:03:34 crc kubenswrapper[4889]: E1128 07:03:34.361741 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerName="extract-utilities" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.361762 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerName="extract-utilities" Nov 28 07:03:34 crc kubenswrapper[4889]: E1128 07:03:34.361782 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerName="extract-content" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.361792 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerName="extract-content" Nov 28 07:03:34 crc kubenswrapper[4889]: E1128 07:03:34.361814 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerName="extract-content" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.361824 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerName="extract-content" Nov 28 07:03:34 crc kubenswrapper[4889]: E1128 07:03:34.361845 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerName="extract-utilities" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.361855 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerName="extract-utilities" Nov 28 07:03:34 crc kubenswrapper[4889]: E1128 07:03:34.361872 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerName="registry-server" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.361881 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerName="registry-server" Nov 28 07:03:34 crc kubenswrapper[4889]: E1128 07:03:34.361892 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerName="registry-server" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.361902 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerName="registry-server" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.362088 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6834453e-9d05-4475-b9b5-332c2f2a07ad" containerName="registry-server" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.362113 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae06f09-e814-45ae-96c8-56939e0dfff9" containerName="registry-server" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.362726 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.366681 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.366940 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-bfsnz" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.367100 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.375681 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jvv8f"] Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.477284 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dkzw\" (UniqueName: \"kubernetes.io/projected/70cdc5d2-6373-4fe4-9b35-fcabe3f1853c-kube-api-access-6dkzw\") pod \"openstack-operator-index-jvv8f\" (UID: \"70cdc5d2-6373-4fe4-9b35-fcabe3f1853c\") " pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.579043 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dkzw\" (UniqueName: \"kubernetes.io/projected/70cdc5d2-6373-4fe4-9b35-fcabe3f1853c-kube-api-access-6dkzw\") pod \"openstack-operator-index-jvv8f\" (UID: \"70cdc5d2-6373-4fe4-9b35-fcabe3f1853c\") " pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.597509 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dkzw\" (UniqueName: \"kubernetes.io/projected/70cdc5d2-6373-4fe4-9b35-fcabe3f1853c-kube-api-access-6dkzw\") pod \"openstack-operator-index-jvv8f\" (UID: \"70cdc5d2-6373-4fe4-9b35-fcabe3f1853c\") " pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.722828 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:34 crc kubenswrapper[4889]: I1128 07:03:34.962553 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jvv8f"] Nov 28 07:03:34 crc kubenswrapper[4889]: W1128 07:03:34.966200 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70cdc5d2_6373_4fe4_9b35_fcabe3f1853c.slice/crio-d6e301b87a3255a0e59847e60ed1a55a9c5046984e9b3741f6b37995c678d16e WatchSource:0}: Error finding container d6e301b87a3255a0e59847e60ed1a55a9c5046984e9b3741f6b37995c678d16e: Status 404 returned error can't find the container with id d6e301b87a3255a0e59847e60ed1a55a9c5046984e9b3741f6b37995c678d16e Nov 28 07:03:35 crc kubenswrapper[4889]: I1128 07:03:35.622509 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jvv8f" event={"ID":"70cdc5d2-6373-4fe4-9b35-fcabe3f1853c","Type":"ContainerStarted","Data":"d6e301b87a3255a0e59847e60ed1a55a9c5046984e9b3741f6b37995c678d16e"} Nov 28 07:03:39 crc kubenswrapper[4889]: I1128 07:03:39.168544 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7bqh"] Nov 28 07:03:39 crc kubenswrapper[4889]: I1128 07:03:39.942535 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6kxh"] Nov 28 07:03:39 crc kubenswrapper[4889]: I1128 07:03:39.942793 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q6kxh" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerName="registry-server" containerID="cri-o://b7adce3d0d2d02f029f3298d01c0fb6823ea530fe12bf6751c7df7cfe4449023" gracePeriod=2 Nov 28 07:03:40 crc kubenswrapper[4889]: I1128 07:03:40.658669 4889 generic.go:334] "Generic (PLEG): container finished" podID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerID="b7adce3d0d2d02f029f3298d01c0fb6823ea530fe12bf6751c7df7cfe4449023" exitCode=0 Nov 28 07:03:40 crc kubenswrapper[4889]: I1128 07:03:40.658724 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6kxh" event={"ID":"aa42c040-43a1-475c-99d0-f7bb57a22a74","Type":"ContainerDied","Data":"b7adce3d0d2d02f029f3298d01c0fb6823ea530fe12bf6751c7df7cfe4449023"} Nov 28 07:03:43 crc kubenswrapper[4889]: I1128 07:03:43.873373 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 07:03:43 crc kubenswrapper[4889]: I1128 07:03:43.919518 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvsll\" (UniqueName: \"kubernetes.io/projected/aa42c040-43a1-475c-99d0-f7bb57a22a74-kube-api-access-jvsll\") pod \"aa42c040-43a1-475c-99d0-f7bb57a22a74\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " Nov 28 07:03:43 crc kubenswrapper[4889]: I1128 07:03:43.919667 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-catalog-content\") pod \"aa42c040-43a1-475c-99d0-f7bb57a22a74\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " Nov 28 07:03:43 crc kubenswrapper[4889]: I1128 07:03:43.919699 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-utilities\") pod \"aa42c040-43a1-475c-99d0-f7bb57a22a74\" (UID: \"aa42c040-43a1-475c-99d0-f7bb57a22a74\") " Nov 28 07:03:43 crc kubenswrapper[4889]: I1128 07:03:43.920573 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-utilities" (OuterVolumeSpecName: "utilities") pod "aa42c040-43a1-475c-99d0-f7bb57a22a74" (UID: "aa42c040-43a1-475c-99d0-f7bb57a22a74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:03:43 crc kubenswrapper[4889]: I1128 07:03:43.924410 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa42c040-43a1-475c-99d0-f7bb57a22a74-kube-api-access-jvsll" (OuterVolumeSpecName: "kube-api-access-jvsll") pod "aa42c040-43a1-475c-99d0-f7bb57a22a74" (UID: "aa42c040-43a1-475c-99d0-f7bb57a22a74"). InnerVolumeSpecName "kube-api-access-jvsll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:03:43 crc kubenswrapper[4889]: I1128 07:03:43.937139 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa42c040-43a1-475c-99d0-f7bb57a22a74" (UID: "aa42c040-43a1-475c-99d0-f7bb57a22a74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.021336 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.021381 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42c040-43a1-475c-99d0-f7bb57a22a74-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.021393 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvsll\" (UniqueName: \"kubernetes.io/projected/aa42c040-43a1-475c-99d0-f7bb57a22a74-kube-api-access-jvsll\") on node \"crc\" DevicePath \"\"" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.682269 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6kxh" event={"ID":"aa42c040-43a1-475c-99d0-f7bb57a22a74","Type":"ContainerDied","Data":"e07450ffd5f0c1f38663628cd067cb6db2b062cf18cbc33809dbedbaf08cd99c"} Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.682334 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6kxh" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.682591 4889 scope.go:117] "RemoveContainer" containerID="b7adce3d0d2d02f029f3298d01c0fb6823ea530fe12bf6751c7df7cfe4449023" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.684135 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jvv8f" event={"ID":"70cdc5d2-6373-4fe4-9b35-fcabe3f1853c","Type":"ContainerStarted","Data":"034af6e58ec42ed425a29d7d3b2c869e4598f4226b21a0d765f10863bae44e1d"} Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.702784 4889 scope.go:117] "RemoveContainer" containerID="6bdc26c1fc36275701cee2dcc4083e4ca77bc0d19e219b12b6baf96522d6afd3" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.711182 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jvv8f" podStartSLOduration=1.613528703 podStartE2EDuration="10.711154822s" podCreationTimestamp="2025-11-28 07:03:34 +0000 UTC" firstStartedPulling="2025-11-28 07:03:34.968831258 +0000 UTC m=+937.939065423" lastFinishedPulling="2025-11-28 07:03:44.066457377 +0000 UTC m=+947.036691542" observedRunningTime="2025-11-28 07:03:44.699584362 +0000 UTC m=+947.669818547" watchObservedRunningTime="2025-11-28 07:03:44.711154822 +0000 UTC m=+947.681389017" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.719348 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6kxh"] Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.721211 4889 scope.go:117] "RemoveContainer" containerID="0860e9f35ddbf04eeb270b0193aaef8fab7cb2c2ce5c97dc303cc39b91d785e0" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.723225 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.723256 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.725842 4889 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-q6kxh"] Nov 28 07:03:44 crc kubenswrapper[4889]: I1128 07:03:44.746611 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:45 crc kubenswrapper[4889]: I1128 07:03:45.351487 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" path="/var/lib/kubelet/pods/aa42c040-43a1-475c-99d0-f7bb57a22a74/volumes" Nov 28 07:03:54 crc kubenswrapper[4889]: I1128 07:03:54.754300 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-jvv8f" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.791202 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4"] Nov 28 07:03:58 crc kubenswrapper[4889]: E1128 07:03:58.792052 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerName="extract-content" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.792067 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerName="extract-content" Nov 28 07:03:58 crc kubenswrapper[4889]: E1128 07:03:58.792081 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerName="registry-server" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.792089 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerName="registry-server" Nov 28 07:03:58 crc kubenswrapper[4889]: E1128 07:03:58.792104 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerName="extract-utilities" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.792114 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerName="extract-utilities" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.792265 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa42c040-43a1-475c-99d0-f7bb57a22a74" containerName="registry-server" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.793327 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.796345 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4"] Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.796615 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mldk7" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.906087 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pz2q\" (UniqueName: \"kubernetes.io/projected/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-kube-api-access-5pz2q\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.906275 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-util\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:58 crc kubenswrapper[4889]: I1128 07:03:58.906511 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-bundle\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.008034 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-util\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.008160 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-bundle\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.008223 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pz2q\" (UniqueName: \"kubernetes.io/projected/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-kube-api-access-5pz2q\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.008566 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-util\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.008686 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-bundle\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.031871 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pz2q\" (UniqueName: \"kubernetes.io/projected/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-kube-api-access-5pz2q\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.109533 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.551315 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4"] Nov 28 07:03:59 crc kubenswrapper[4889]: W1128 07:03:59.559349 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8cecfd2_e3f3_46e2_aa8d_a4bab90a07db.slice/crio-343d4c5af7d66769ad464201749a4c24de4f09f7c4e10fa917432f3be4c31ebc WatchSource:0}: Error finding container 343d4c5af7d66769ad464201749a4c24de4f09f7c4e10fa917432f3be4c31ebc: Status 404 returned error can't find the container with id 343d4c5af7d66769ad464201749a4c24de4f09f7c4e10fa917432f3be4c31ebc Nov 28 07:03:59 crc kubenswrapper[4889]: I1128 07:03:59.781858 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" event={"ID":"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db","Type":"ContainerStarted","Data":"343d4c5af7d66769ad464201749a4c24de4f09f7c4e10fa917432f3be4c31ebc"} Nov 28 07:04:00 crc kubenswrapper[4889]: I1128 07:04:00.790765 4889 generic.go:334] "Generic (PLEG): container finished" podID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerID="8c1ebe292866a716e0a71ab647fd81548ce625ae667fd4119e84353a32117eb2" exitCode=0 Nov 28 07:04:00 crc kubenswrapper[4889]: I1128 07:04:00.790853 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" event={"ID":"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db","Type":"ContainerDied","Data":"8c1ebe292866a716e0a71ab647fd81548ce625ae667fd4119e84353a32117eb2"} Nov 28 07:04:02 crc kubenswrapper[4889]: I1128 07:04:02.805032 4889 generic.go:334] "Generic (PLEG): container finished" podID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerID="c3001286fa27733005599a4caaf411369e5549277e04e5a60be9bb299dd22201" exitCode=0 Nov 28 07:04:02 crc kubenswrapper[4889]: I1128 07:04:02.805111 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" event={"ID":"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db","Type":"ContainerDied","Data":"c3001286fa27733005599a4caaf411369e5549277e04e5a60be9bb299dd22201"} Nov 28 07:04:03 crc kubenswrapper[4889]: I1128 07:04:03.813895 4889 generic.go:334] "Generic (PLEG): container finished" podID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerID="b72fd8115dec19c4da49f805bb6567b5189e11b1e8eb16df0d70d97651f864bf" exitCode=0 Nov 28 07:04:03 crc kubenswrapper[4889]: I1128 07:04:03.813947 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" event={"ID":"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db","Type":"ContainerDied","Data":"b72fd8115dec19c4da49f805bb6567b5189e11b1e8eb16df0d70d97651f864bf"} Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.037615 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.128462 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pz2q\" (UniqueName: \"kubernetes.io/projected/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-kube-api-access-5pz2q\") pod \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.128537 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-bundle\") pod \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.128562 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-util\") pod \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\" (UID: \"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db\") " Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.129419 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-bundle" (OuterVolumeSpecName: "bundle") pod "d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" (UID: "d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.133561 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-kube-api-access-5pz2q" (OuterVolumeSpecName: "kube-api-access-5pz2q") pod "d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" (UID: "d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db"). InnerVolumeSpecName "kube-api-access-5pz2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.230251 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pz2q\" (UniqueName: \"kubernetes.io/projected/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-kube-api-access-5pz2q\") on node \"crc\" DevicePath \"\"" Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.230292 4889 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.382446 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-util" (OuterVolumeSpecName: "util") pod "d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" (UID: "d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.432398 4889 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db-util\") on node \"crc\" DevicePath \"\"" Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.828798 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" event={"ID":"d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db","Type":"ContainerDied","Data":"343d4c5af7d66769ad464201749a4c24de4f09f7c4e10fa917432f3be4c31ebc"} Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.828852 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="343d4c5af7d66769ad464201749a4c24de4f09f7c4e10fa917432f3be4c31ebc" Nov 28 07:04:05 crc kubenswrapper[4889]: I1128 07:04:05.828858 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4" Nov 28 07:04:10 crc kubenswrapper[4889]: I1128 07:04:10.933377 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m"] Nov 28 07:04:10 crc kubenswrapper[4889]: E1128 07:04:10.934097 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerName="extract" Nov 28 07:04:10 crc kubenswrapper[4889]: I1128 07:04:10.934108 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerName="extract" Nov 28 07:04:10 crc kubenswrapper[4889]: E1128 07:04:10.934126 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerName="util" Nov 28 07:04:10 crc kubenswrapper[4889]: I1128 07:04:10.934131 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerName="util" Nov 28 07:04:10 crc kubenswrapper[4889]: E1128 07:04:10.934141 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerName="pull" Nov 28 07:04:10 crc kubenswrapper[4889]: I1128 07:04:10.934148 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerName="pull" Nov 28 07:04:10 crc kubenswrapper[4889]: I1128 07:04:10.934259 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db" containerName="extract" Nov 28 07:04:10 crc kubenswrapper[4889]: I1128 07:04:10.934792 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" Nov 28 07:04:10 crc kubenswrapper[4889]: I1128 07:04:10.936793 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-c8zvq" Nov 28 07:04:10 crc kubenswrapper[4889]: I1128 07:04:10.964337 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m"] Nov 28 07:04:11 crc kubenswrapper[4889]: I1128 07:04:11.004843 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbrd\" (UniqueName: \"kubernetes.io/projected/2effd529-dd6f-4763-9f9e-585e03124be7-kube-api-access-sdbrd\") pod \"openstack-operator-controller-operator-67d8f6cc56-8bp8m\" (UID: \"2effd529-dd6f-4763-9f9e-585e03124be7\") " pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" Nov 28 07:04:11 crc kubenswrapper[4889]: I1128 07:04:11.106553 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdbrd\" (UniqueName: \"kubernetes.io/projected/2effd529-dd6f-4763-9f9e-585e03124be7-kube-api-access-sdbrd\") pod \"openstack-operator-controller-operator-67d8f6cc56-8bp8m\" (UID: \"2effd529-dd6f-4763-9f9e-585e03124be7\") " pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" Nov 28 07:04:11 crc kubenswrapper[4889]: I1128 07:04:11.124477 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdbrd\" (UniqueName: \"kubernetes.io/projected/2effd529-dd6f-4763-9f9e-585e03124be7-kube-api-access-sdbrd\") pod \"openstack-operator-controller-operator-67d8f6cc56-8bp8m\" 
(UID: \"2effd529-dd6f-4763-9f9e-585e03124be7\") " pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" Nov 28 07:04:11 crc kubenswrapper[4889]: I1128 07:04:11.250467 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" Nov 28 07:04:11 crc kubenswrapper[4889]: I1128 07:04:11.660243 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m"] Nov 28 07:04:11 crc kubenswrapper[4889]: I1128 07:04:11.874309 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" event={"ID":"2effd529-dd6f-4763-9f9e-585e03124be7","Type":"ContainerStarted","Data":"8299e085072327529b06b5b3abc728e7ec47ed6d67f8cd215b3039c96de83006"} Nov 28 07:04:20 crc kubenswrapper[4889]: I1128 07:04:20.927861 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" event={"ID":"2effd529-dd6f-4763-9f9e-585e03124be7","Type":"ContainerStarted","Data":"35e315f8f291273d435f9e606b5928f34244cb7560cd008141bf0c519acd1deb"} Nov 28 07:04:20 crc kubenswrapper[4889]: I1128 07:04:20.928782 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" Nov 28 07:04:20 crc kubenswrapper[4889]: I1128 07:04:20.966012 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" podStartSLOduration=2.174252726 podStartE2EDuration="10.965994916s" podCreationTimestamp="2025-11-28 07:04:10 +0000 UTC" firstStartedPulling="2025-11-28 07:04:11.666402427 +0000 UTC m=+974.636636582" lastFinishedPulling="2025-11-28 07:04:20.458144617 +0000 UTC m=+983.428378772" observedRunningTime="2025-11-28 07:04:20.962095379 +0000 UTC m=+983.932329534" watchObservedRunningTime="2025-11-28 07:04:20.965994916 +0000 UTC m=+983.936229071" Nov 28 07:04:28 crc kubenswrapper[4889]: I1128 07:04:28.782622 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:04:28 crc kubenswrapper[4889]: I1128 07:04:28.783309 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:04:31 crc kubenswrapper[4889]: I1128 07:04:31.253577 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-8bp8m" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.203507 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.205154 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.209280 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fb45m" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.214216 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.215220 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.221855 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-hz84r" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.227871 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.232279 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-mxn8f"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.233342 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.236590 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9786f" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.247819 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd5lc\" (UniqueName: \"kubernetes.io/projected/0ee115df-19fd-4ca6-a087-9f4a56a86378-kube-api-access-fd5lc\") pod \"barbican-operator-controller-manager-7b64f4fb85-rwzxg\" (UID: \"0ee115df-19fd-4ca6-a087-9f4a56a86378\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.256062 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.265881 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-mxn8f"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.293370 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.306094 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.315099 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-89rr6" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.327928 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.345769 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.346798 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.348659 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgp5r\" (UniqueName: \"kubernetes.io/projected/dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a-kube-api-access-sgp5r\") pod \"glance-operator-controller-manager-589cbd6b5b-qcklj\" (UID: \"dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.348722 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7b6\" (UniqueName: \"kubernetes.io/projected/b9d9668a-02e9-4d9f-856e-be23f0484ccf-kube-api-access-jm7b6\") pod \"designate-operator-controller-manager-955677c94-mxn8f\" (UID: \"b9d9668a-02e9-4d9f-856e-be23f0484ccf\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.348832 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd5lc\" (UniqueName: \"kubernetes.io/projected/0ee115df-19fd-4ca6-a087-9f4a56a86378-kube-api-access-fd5lc\") pod \"barbican-operator-controller-manager-7b64f4fb85-rwzxg\" (UID: \"0ee115df-19fd-4ca6-a087-9f4a56a86378\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.348856 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtn4n\" (UniqueName: \"kubernetes.io/projected/363ed2cd-915f-4260-8eb7-950ff710b500-kube-api-access-xtn4n\") pod \"cinder-operator-controller-manager-6b7f75547b-kvt48\" (UID: \"363ed2cd-915f-4260-8eb7-950ff710b500\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.358543 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lkg25" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.381292 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.391796 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.392826 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.396474 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd5lc\" (UniqueName: \"kubernetes.io/projected/0ee115df-19fd-4ca6-a087-9f4a56a86378-kube-api-access-fd5lc\") pod \"barbican-operator-controller-manager-7b64f4fb85-rwzxg\" (UID: \"0ee115df-19fd-4ca6-a087-9f4a56a86378\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.396993 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-szhgr" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.428178 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.450271 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfv6\" (UniqueName: \"kubernetes.io/projected/d281dca0-e9e1-4e2d-befc-0508ae9421b9-kube-api-access-cjfv6\") pod \"heat-operator-controller-manager-5b77f656f-dr6z8\" (UID: \"d281dca0-e9e1-4e2d-befc-0508ae9421b9\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.450339 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtn4n\" (UniqueName: \"kubernetes.io/projected/363ed2cd-915f-4260-8eb7-950ff710b500-kube-api-access-xtn4n\") pod \"cinder-operator-controller-manager-6b7f75547b-kvt48\" (UID: \"363ed2cd-915f-4260-8eb7-950ff710b500\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.450375 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxf6\" (UniqueName: \"kubernetes.io/projected/178814bc-902e-43d9-a606-c3640477a94d-kube-api-access-8bxf6\") pod \"horizon-operator-controller-manager-5d494799bf-lbvbd\" (UID: \"178814bc-902e-43d9-a606-c3640477a94d\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.450401 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgp5r\" (UniqueName: \"kubernetes.io/projected/dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a-kube-api-access-sgp5r\") pod \"glance-operator-controller-manager-589cbd6b5b-qcklj\" (UID: \"dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.450429 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm7b6\" (UniqueName: \"kubernetes.io/projected/b9d9668a-02e9-4d9f-856e-be23f0484ccf-kube-api-access-jm7b6\") pod \"designate-operator-controller-manager-955677c94-mxn8f\" (UID: \"b9d9668a-02e9-4d9f-856e-be23f0484ccf\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.473480 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.489808 
4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.490613 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.490797 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.506267 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8w6dn" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.506521 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9fbr5" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.506672 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.533330 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm7b6\" (UniqueName: \"kubernetes.io/projected/b9d9668a-02e9-4d9f-856e-be23f0484ccf-kube-api-access-jm7b6\") pod \"designate-operator-controller-manager-955677c94-mxn8f\" (UID: \"b9d9668a-02e9-4d9f-856e-be23f0484ccf\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.535104 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.535829 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgp5r\" (UniqueName: \"kubernetes.io/projected/dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a-kube-api-access-sgp5r\") pod \"glance-operator-controller-manager-589cbd6b5b-qcklj\" (UID: \"dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.535891 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtn4n\" (UniqueName: \"kubernetes.io/projected/363ed2cd-915f-4260-8eb7-950ff710b500-kube-api-access-xtn4n\") pod \"cinder-operator-controller-manager-6b7f75547b-kvt48\" (UID: \"363ed2cd-915f-4260-8eb7-950ff710b500\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.535893 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.536821 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.537216 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.555492 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-w2p2w" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.557489 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86jj\" (UniqueName: \"kubernetes.io/projected/dcd06fe4-e876-4947-b5c6-812381c42b71-kube-api-access-x86jj\") pod \"ironic-operator-controller-manager-67cb4dc6d4-k5wp6\" (UID: \"dcd06fe4-e876-4947-b5c6-812381c42b71\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.557531 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sscwx\" (UniqueName: \"kubernetes.io/projected/1fec2494-e72e-4019-a869-b3080018f75d-kube-api-access-sscwx\") pod \"keystone-operator-controller-manager-7b4567c7cf-mlwrn\" (UID: \"1fec2494-e72e-4019-a869-b3080018f75d\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.557574 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxf6\" (UniqueName: \"kubernetes.io/projected/178814bc-902e-43d9-a606-c3640477a94d-kube-api-access-8bxf6\") pod \"horizon-operator-controller-manager-5d494799bf-lbvbd\" (UID: \"178814bc-902e-43d9-a606-c3640477a94d\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.557613 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.557657 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfv6\" (UniqueName: \"kubernetes.io/projected/d281dca0-e9e1-4e2d-befc-0508ae9421b9-kube-api-access-cjfv6\") pod \"heat-operator-controller-manager-5b77f656f-dr6z8\" (UID: \"d281dca0-e9e1-4e2d-befc-0508ae9421b9\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.557682 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bx8g\" (UniqueName: \"kubernetes.io/projected/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-kube-api-access-7bx8g\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.571559 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.601171 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.619400 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfv6\" (UniqueName: \"kubernetes.io/projected/d281dca0-e9e1-4e2d-befc-0508ae9421b9-kube-api-access-cjfv6\") pod \"heat-operator-controller-manager-5b77f656f-dr6z8\" (UID: \"d281dca0-e9e1-4e2d-befc-0508ae9421b9\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.622459 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxf6\" (UniqueName: \"kubernetes.io/projected/178814bc-902e-43d9-a606-c3640477a94d-kube-api-access-8bxf6\") pod \"horizon-operator-controller-manager-5d494799bf-lbvbd\" (UID: \"178814bc-902e-43d9-a606-c3640477a94d\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.633354 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6"] Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.644663 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.663377 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bx8g\" (UniqueName: \"kubernetes.io/projected/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-kube-api-access-7bx8g\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.663416 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86jj\" (UniqueName: \"kubernetes.io/projected/dcd06fe4-e876-4947-b5c6-812381c42b71-kube-api-access-x86jj\") pod \"ironic-operator-controller-manager-67cb4dc6d4-k5wp6\" (UID: \"dcd06fe4-e876-4947-b5c6-812381c42b71\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.663445 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sscwx\" (UniqueName: \"kubernetes.io/projected/1fec2494-e72e-4019-a869-b3080018f75d-kube-api-access-sscwx\") pod \"keystone-operator-controller-manager-7b4567c7cf-mlwrn\" (UID: \"1fec2494-e72e-4019-a869-b3080018f75d\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.663499 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:04:50 crc kubenswrapper[4889]: E1128 07:04:50.663612 4889 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 07:04:50 crc kubenswrapper[4889]: E1128 07:04:50.663654 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert podName:559d7ec2-8cd6-4c5c-a844-c7f3953ec021 nodeName:}" failed. No retries permitted until 2025-11-28 07:04:51.163638467 +0000 UTC m=+1014.133872622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert") pod "infra-operator-controller-manager-57548d458d-lwdbd" (UID: "559d7ec2-8cd6-4c5c-a844-c7f3953ec021") : secret "infra-operator-webhook-server-cert" not found
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.668761 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn"]
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.691895 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.738524 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w"]
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.738809 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sscwx\" (UniqueName: \"kubernetes.io/projected/1fec2494-e72e-4019-a869-b3080018f75d-kube-api-access-sscwx\") pod \"keystone-operator-controller-manager-7b4567c7cf-mlwrn\" (UID: \"1fec2494-e72e-4019-a869-b3080018f75d\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.739769 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.743895 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bx8g\" (UniqueName: \"kubernetes.io/projected/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-kube-api-access-7bx8g\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.747322 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4ls5f"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.747525 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86jj\" (UniqueName: \"kubernetes.io/projected/dcd06fe4-e876-4947-b5c6-812381c42b71-kube-api-access-x86jj\") pod \"ironic-operator-controller-manager-67cb4dc6d4-k5wp6\" (UID: \"dcd06fe4-e876-4947-b5c6-812381c42b71\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.770103 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.770209 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psvl5\" (UniqueName: \"kubernetes.io/projected/8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d-kube-api-access-psvl5\") pod \"manila-operator-controller-manager-5d499bf58b-svz4w\" (UID: \"8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.770715 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w"]
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.781614 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.831746 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d"]
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.855120 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.857815 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-swwvs"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.867145 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d"]
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.871698 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvl5\" (UniqueName: \"kubernetes.io/projected/8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d-kube-api-access-psvl5\") pod \"manila-operator-controller-manager-5d499bf58b-svz4w\" (UID: \"8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.880833 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns"]
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.881866 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.910142 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-drrlg"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.940558 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvl5\" (UniqueName: \"kubernetes.io/projected/8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d-kube-api-access-psvl5\") pod \"manila-operator-controller-manager-5d499bf58b-svz4w\" (UID: \"8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.965655 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s"]
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.983260 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.973012 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snhzq\" (UniqueName: \"kubernetes.io/projected/50833719-605c-4e59-9535-7377eeb99994-kube-api-access-snhzq\") pod \"neutron-operator-controller-manager-6fdcddb789-hds8d\" (UID: \"50833719-605c-4e59-9535-7377eeb99994\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.988089 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvxrl\" (UniqueName: \"kubernetes.io/projected/670339cb-0ec6-48bc-b892-c14ad66849c0-kube-api-access-jvxrl\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-g6lns\" (UID: \"670339cb-0ec6-48bc-b892-c14ad66849c0\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns"
Nov 28 07:04:50 crc kubenswrapper[4889]: I1128 07:04:50.993201 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.006470 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-pmcxl"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.029701 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.032166 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.037348 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.047297 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-86pd9"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.076184 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.079218 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.095042 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvxrl\" (UniqueName: \"kubernetes.io/projected/670339cb-0ec6-48bc-b892-c14ad66849c0-kube-api-access-jvxrl\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-g6lns\" (UID: \"670339cb-0ec6-48bc-b892-c14ad66849c0\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.095152 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbmp\" (UniqueName: \"kubernetes.io/projected/e81f775c-9ce2-415f-8bd3-ed49458ae893-kube-api-access-5zbmp\") pod \"octavia-operator-controller-manager-64cdc6ff96-hmpqd\" (UID: \"e81f775c-9ce2-415f-8bd3-ed49458ae893\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.095173 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mlfp\" (UniqueName: \"kubernetes.io/projected/1a7dd634-e6b8-435c-963b-e482cc1d0cac-kube-api-access-7mlfp\") pod \"nova-operator-controller-manager-79556f57fc-hd79s\" (UID: \"1a7dd634-e6b8-435c-963b-e482cc1d0cac\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.095226 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snhzq\" (UniqueName: \"kubernetes.io/projected/50833719-605c-4e59-9535-7377eeb99994-kube-api-access-snhzq\") pod \"neutron-operator-controller-manager-6fdcddb789-hds8d\" (UID: \"50833719-605c-4e59-9535-7377eeb99994\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.097461 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.099221 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.104930 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9dnsx"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.105065 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.109129 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.112680 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.113484 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.114737 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.118594 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6kmqh"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.123838 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.130620 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.132029 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.136666 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snhzq\" (UniqueName: \"kubernetes.io/projected/50833719-605c-4e59-9535-7377eeb99994-kube-api-access-snhzq\") pod \"neutron-operator-controller-manager-6fdcddb789-hds8d\" (UID: \"50833719-605c-4e59-9535-7377eeb99994\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.140979 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-645lz"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.141217 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvxrl\" (UniqueName: \"kubernetes.io/projected/670339cb-0ec6-48bc-b892-c14ad66849c0-kube-api-access-jvxrl\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-g6lns\" (UID: \"670339cb-0ec6-48bc-b892-c14ad66849c0\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.159874 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.196409 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kwb\" (UniqueName: \"kubernetes.io/projected/af524ba5-acaf-4f33-bb04-6c2818b1cdf5-kube-api-access-v4kwb\") pod \"placement-operator-controller-manager-57988cc5b5-dp5mk\" (UID: \"af524ba5-acaf-4f33-bb04-6c2818b1cdf5\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.196485 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvltn\" (UniqueName: \"kubernetes.io/projected/22d7b246-073e-4b87-81f8-04cb344e317c-kube-api-access-qvltn\") pod \"ovn-operator-controller-manager-56897c768d-pzwp7\" (UID: \"22d7b246-073e-4b87-81f8-04cb344e317c\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.196527 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.196546 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zbmp\" (UniqueName: \"kubernetes.io/projected/e81f775c-9ce2-415f-8bd3-ed49458ae893-kube-api-access-5zbmp\") pod \"octavia-operator-controller-manager-64cdc6ff96-hmpqd\" (UID: \"e81f775c-9ce2-415f-8bd3-ed49458ae893\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.196566 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mlfp\" (UniqueName: \"kubernetes.io/projected/1a7dd634-e6b8-435c-963b-e482cc1d0cac-kube-api-access-7mlfp\") pod \"nova-operator-controller-manager-79556f57fc-hd79s\" (UID: \"1a7dd634-e6b8-435c-963b-e482cc1d0cac\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.196604 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv88b\" (UniqueName: \"kubernetes.io/projected/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-kube-api-access-rv88b\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.196631 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.197127 4889 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.197178 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert podName:559d7ec2-8cd6-4c5c-a844-c7f3953ec021 nodeName:}" failed. No retries permitted until 2025-11-28 07:04:52.197162333 +0000 UTC m=+1015.167396488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert") pod "infra-operator-controller-manager-57548d458d-lwdbd" (UID: "559d7ec2-8cd6-4c5c-a844-c7f3953ec021") : secret "infra-operator-webhook-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.208878 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.210775 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.215976 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hrgf9"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.219897 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.221056 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.224780 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2wfk4"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.225735 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.232995 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.233774 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbmp\" (UniqueName: \"kubernetes.io/projected/e81f775c-9ce2-415f-8bd3-ed49458ae893-kube-api-access-5zbmp\") pod \"octavia-operator-controller-manager-64cdc6ff96-hmpqd\" (UID: \"e81f775c-9ce2-415f-8bd3-ed49458ae893\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.240401 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.241463 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.241972 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mlfp\" (UniqueName: \"kubernetes.io/projected/1a7dd634-e6b8-435c-963b-e482cc1d0cac-kube-api-access-7mlfp\") pod \"nova-operator-controller-manager-79556f57fc-hd79s\" (UID: \"1a7dd634-e6b8-435c-963b-e482cc1d0cac\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.244317 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vn5qp"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.265698 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.264533 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.266863 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.271290 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.276806 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-m22zs"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.278017 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.298239 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv88b\" (UniqueName: \"kubernetes.io/projected/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-kube-api-access-rv88b\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.298500 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-762kr\" (UniqueName: \"kubernetes.io/projected/f02b81e8-ad8a-445e-8ebd-156f05fdd9e7-kube-api-access-762kr\") pod \"test-operator-controller-manager-5cd6c7f4c8-tcbth\" (UID: \"f02b81e8-ad8a-445e-8ebd-156f05fdd9e7\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.298577 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.298660 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2qv\" (UniqueName: \"kubernetes.io/projected/47734956-e3b4-4ca1-8f4b-490b2f861bf0-kube-api-access-vb2qv\") pod \"swift-operator-controller-manager-d77b94747-j9vh5\" (UID: \"47734956-e3b4-4ca1-8f4b-490b2f861bf0\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.298752 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gcxp\" (UniqueName: \"kubernetes.io/projected/729b51c8-1b36-4716-8c7a-ae23ed249f03-kube-api-access-6gcxp\") pod \"watcher-operator-controller-manager-656dcb59d4-9swrr\" (UID: \"729b51c8-1b36-4716-8c7a-ae23ed249f03\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr"
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.298799 4889 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.298929 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert podName:a9f38b46-2bae-4e2d-8b02-c314b9e8f77a nodeName:}" failed. No retries permitted until 2025-11-28 07:04:51.798909867 +0000 UTC m=+1014.769144012 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" (UID: "a9f38b46-2bae-4e2d-8b02-c314b9e8f77a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.298863 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4kwb\" (UniqueName: \"kubernetes.io/projected/af524ba5-acaf-4f33-bb04-6c2818b1cdf5-kube-api-access-v4kwb\") pod \"placement-operator-controller-manager-57988cc5b5-dp5mk\" (UID: \"af524ba5-acaf-4f33-bb04-6c2818b1cdf5\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.299059 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjb6z\" (UniqueName: \"kubernetes.io/projected/b531db0a-6f24-4a61-811a-d75de0f59e94-kube-api-access-bjb6z\") pod \"telemetry-operator-controller-manager-76cc84c6bb-kp8mv\" (UID: \"b531db0a-6f24-4a61-811a-d75de0f59e94\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.299106 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvltn\" (UniqueName: \"kubernetes.io/projected/22d7b246-073e-4b87-81f8-04cb344e317c-kube-api-access-qvltn\") pod \"ovn-operator-controller-manager-56897c768d-pzwp7\" (UID: \"22d7b246-073e-4b87-81f8-04cb344e317c\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.311595 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.312534 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.315205 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.315409 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wnsfp"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.315522 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.316857 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.318585 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4kwb\" (UniqueName: \"kubernetes.io/projected/af524ba5-acaf-4f33-bb04-6c2818b1cdf5-kube-api-access-v4kwb\") pod \"placement-operator-controller-manager-57988cc5b5-dp5mk\" (UID: \"af524ba5-acaf-4f33-bb04-6c2818b1cdf5\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.320870 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.323170 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvltn\" (UniqueName: \"kubernetes.io/projected/22d7b246-073e-4b87-81f8-04cb344e317c-kube-api-access-qvltn\") pod \"ovn-operator-controller-manager-56897c768d-pzwp7\" (UID: \"22d7b246-073e-4b87-81f8-04cb344e317c\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.323583 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv88b\" (UniqueName: \"kubernetes.io/projected/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-kube-api-access-rv88b\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.330566 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.332327 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.333830 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.334877 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-khpgh"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.368377 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.388392 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.400768 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtl96\" (UniqueName: \"kubernetes.io/projected/37909eac-261b-42f4-b85e-14fd8f00c42b-kube-api-access-mtl96\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6dwz7\" (UID: \"37909eac-261b-42f4-b85e-14fd8f00c42b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.400838 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjb6z\" (UniqueName: \"kubernetes.io/projected/b531db0a-6f24-4a61-811a-d75de0f59e94-kube-api-access-bjb6z\") pod \"telemetry-operator-controller-manager-76cc84c6bb-kp8mv\" (UID: \"b531db0a-6f24-4a61-811a-d75de0f59e94\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.400902 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkghc\" (UniqueName: \"kubernetes.io/projected/efc28083-2792-41ee-a835-5953afb3070d-kube-api-access-pkghc\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.401021 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.401064 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-762kr\" (UniqueName: \"kubernetes.io/projected/f02b81e8-ad8a-445e-8ebd-156f05fdd9e7-kube-api-access-762kr\") pod \"test-operator-controller-manager-5cd6c7f4c8-tcbth\" (UID: \"f02b81e8-ad8a-445e-8ebd-156f05fdd9e7\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.401092 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb2qv\" (UniqueName: \"kubernetes.io/projected/47734956-e3b4-4ca1-8f4b-490b2f861bf0-kube-api-access-vb2qv\") pod \"swift-operator-controller-manager-d77b94747-j9vh5\" (UID: \"47734956-e3b4-4ca1-8f4b-490b2f861bf0\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.401109 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gcxp\" (UniqueName: \"kubernetes.io/projected/729b51c8-1b36-4716-8c7a-ae23ed249f03-kube-api-access-6gcxp\") pod \"watcher-operator-controller-manager-656dcb59d4-9swrr\" (UID: \"729b51c8-1b36-4716-8c7a-ae23ed249f03\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.401151 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.418715 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-762kr\" (UniqueName: \"kubernetes.io/projected/f02b81e8-ad8a-445e-8ebd-156f05fdd9e7-kube-api-access-762kr\") pod \"test-operator-controller-manager-5cd6c7f4c8-tcbth\" (UID: \"f02b81e8-ad8a-445e-8ebd-156f05fdd9e7\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.420000 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb2qv\" (UniqueName: \"kubernetes.io/projected/47734956-e3b4-4ca1-8f4b-490b2f861bf0-kube-api-access-vb2qv\") pod \"swift-operator-controller-manager-d77b94747-j9vh5\" (UID: \"47734956-e3b4-4ca1-8f4b-490b2f861bf0\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.421174 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gcxp\" (UniqueName: \"kubernetes.io/projected/729b51c8-1b36-4716-8c7a-ae23ed249f03-kube-api-access-6gcxp\") pod \"watcher-operator-controller-manager-656dcb59d4-9swrr\" (UID: \"729b51c8-1b36-4716-8c7a-ae23ed249f03\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.421999 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjb6z\" (UniqueName: \"kubernetes.io/projected/b531db0a-6f24-4a61-811a-d75de0f59e94-kube-api-access-bjb6z\") pod \"telemetry-operator-controller-manager-76cc84c6bb-kp8mv\" (UID: \"b531db0a-6f24-4a61-811a-d75de0f59e94\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.491521 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.502511 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtl96\" (UniqueName: \"kubernetes.io/projected/37909eac-261b-42f4-b85e-14fd8f00c42b-kube-api-access-mtl96\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6dwz7\" (UID: \"37909eac-261b-42f4-b85e-14fd8f00c42b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.502582 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkghc\" (UniqueName: \"kubernetes.io/projected/efc28083-2792-41ee-a835-5953afb3070d-kube-api-access-pkghc\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.502653 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.502719 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.502869 4889 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.502926 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:04:52.002907408 +0000 UTC m=+1014.973141563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "metrics-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.503501 4889 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.503536 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:04:52.003527143 +0000 UTC m=+1014.973761308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "webhook-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.528057 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.529092 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtl96\" (UniqueName: \"kubernetes.io/projected/37909eac-261b-42f4-b85e-14fd8f00c42b-kube-api-access-mtl96\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6dwz7\" (UID: \"37909eac-261b-42f4-b85e-14fd8f00c42b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.529492 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkghc\" (UniqueName: \"kubernetes.io/projected/efc28083-2792-41ee-a835-5953afb3070d-kube-api-access-pkghc\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.561005 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.644340 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.663058 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.673379 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.675518 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg"]
Nov 28 07:04:51 crc kubenswrapper[4889]: W1128 07:04:51.701743 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ee115df_19fd_4ca6_a087_9f4a56a86378.slice/crio-3c5d2dd689681dfae725bb13b49ba2e8219491247f86e4bc9ac9130f65290882 WatchSource:0}: Error finding container 3c5d2dd689681dfae725bb13b49ba2e8219491247f86e4bc9ac9130f65290882: Status 404 returned error can't find the container with id 3c5d2dd689681dfae725bb13b49ba2e8219491247f86e4bc9ac9130f65290882
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.715405 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7"
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.808578 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.808774 4889 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: E1128 07:04:51.808821 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert podName:a9f38b46-2bae-4e2d-8b02-c314b9e8f77a nodeName:}" failed. No retries permitted until 2025-11-28 07:04:52.808807857 +0000 UTC m=+1015.779042012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" (UID: "a9f38b46-2bae-4e2d-8b02-c314b9e8f77a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.883698 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.901941 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.909679 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-mxn8f"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.915603 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8"]
Nov 28 07:04:51 crc kubenswrapper[4889]: I1128 07:04:51.923600 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd"]
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.010328 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.010392 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.010558 4889 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.010605 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:04:53.010591693 +0000 UTC m=+1015.980825848 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "metrics-server-cert" not found
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.010797 4889 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.010880 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:04:53.010856999 +0000 UTC m=+1015.981091154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "webhook-server-cert" not found
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.062615 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj"]
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.101487 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d"]
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.108066 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6"]
Nov 28 07:04:52 crc kubenswrapper[4889]: W1128 07:04:52.112044 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50833719_605c_4e59_9535_7377eeb99994.slice/crio-67af1d772f5c5521a63cad228f68055f846db5e080393648eb51885e23b51ad9 WatchSource:0}: Error finding container 67af1d772f5c5521a63cad228f68055f846db5e080393648eb51885e23b51ad9: Status 404 returned error can't find the container with id 67af1d772f5c5521a63cad228f68055f846db5e080393648eb51885e23b51ad9
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.115050 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" event={"ID":"d281dca0-e9e1-4e2d-befc-0508ae9421b9","Type":"ContainerStarted","Data":"84fa2ac3205eccebd1d680815eb4ae5c482d3cb3f500c7eb3a3712d8c2243ee9"}
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.116081 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" event={"ID":"0ee115df-19fd-4ca6-a087-9f4a56a86378","Type":"ContainerStarted","Data":"3c5d2dd689681dfae725bb13b49ba2e8219491247f86e4bc9ac9130f65290882"}
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.119529 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" event={"ID":"178814bc-902e-43d9-a606-c3640477a94d","Type":"ContainerStarted","Data":"43d4386b54226bb4537b7e7fffd62c29b3546967c9350d6f187cd1587a962eef"}
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.121519 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" event={"ID":"1fec2494-e72e-4019-a869-b3080018f75d","Type":"ContainerStarted","Data":"a8ce1a71b45598810093e96faa6225b5e026d0fbf4b9fed751121e1d1dd861d3"}
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.124140 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" event={"ID":"b9d9668a-02e9-4d9f-856e-be23f0484ccf","Type":"ContainerStarted","Data":"f45f3c8149a06ce8c05f21d97f4d5da6be01da54b2abf94c5c2fbd746186250f"}
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.125206 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" event={"ID":"363ed2cd-915f-4260-8eb7-950ff710b500","Type":"ContainerStarted","Data":"1b3704eb3084deed71d84e4c928d6af101ccde599eaa06e39a0e9a8964fb1555"}
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.125937 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" event={"ID":"dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a","Type":"ContainerStarted","Data":"e55929d856b4a53dc4df385bb87f04b50efe763708a4d919a24f7a5c15011a1f"}
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.142212 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w"]
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.147662 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns"]
Nov 28 07:04:52 crc kubenswrapper[4889]: W1128 07:04:52.155114 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b08c9da_c161_4ca7_a50a_f70b7ee7ce7d.slice/crio-015e788ef2146baf4d22a8b95e0e76b50689d4bc96f8a7cc8da4b1a709bbb289 WatchSource:0}: Error finding container 015e788ef2146baf4d22a8b95e0e76b50689d4bc96f8a7cc8da4b1a709bbb289: Status 404 returned error can't find the container with id 015e788ef2146baf4d22a8b95e0e76b50689d4bc96f8a7cc8da4b1a709bbb289
Nov 28 07:04:52 crc kubenswrapper[4889]: W1128 07:04:52.156277 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670339cb_0ec6_48bc_b892_c14ad66849c0.slice/crio-e16a17cc4ab8c23404c0d01bc754b5b7c3028546239420ab9f2a45fa6cc079d0 WatchSource:0}: Error finding container e16a17cc4ab8c23404c0d01bc754b5b7c3028546239420ab9f2a45fa6cc079d0: Status 404 returned error can't find the container with id e16a17cc4ab8c23404c0d01bc754b5b7c3028546239420ab9f2a45fa6cc079d0
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.203237 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk"]
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.214479 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd"
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.215033 4889 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.215089 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert podName:559d7ec2-8cd6-4c5c-a844-c7f3953ec021 nodeName:}" failed. No retries permitted until 2025-11-28 07:04:54.215071416 +0000 UTC m=+1017.185305571 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert") pod "infra-operator-controller-manager-57548d458d-lwdbd" (UID: "559d7ec2-8cd6-4c5c-a844-c7f3953ec021") : secret "infra-operator-webhook-server-cert" not found
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.218015 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7"]
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.226344 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvltn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-pzwp7_openstack-operators(22d7b246-073e-4b87-81f8-04cb344e317c): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.232032 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvltn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-pzwp7_openstack-operators(22d7b246-073e-4b87-81f8-04cb344e317c): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.233430 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" podUID="22d7b246-073e-4b87-81f8-04cb344e317c"
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.235428 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5"]
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.249721 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vb2qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-j9vh5_openstack-operators(47734956-e3b4-4ca1-8f4b-490b2f861bf0): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.251402 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vb2qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-j9vh5_openstack-operators(47734956-e3b4-4ca1-8f4b-490b2f861bf0): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.251430 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s"]
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.253434 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" podUID="47734956-e3b4-4ca1-8f4b-490b2f861bf0"
Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.256193 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd"]
Nov 28 07:04:52 crc kubenswrapper[4889]: W1128 07:04:52.256632 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode81f775c_9ce2_415f_8bd3_ed49458ae893.slice/crio-50527661e7967cd716069a1a17c27b92c2842ddae6a65b5d7158c8b5401f97a5 WatchSource:0}: Error finding container 50527661e7967cd716069a1a17c27b92c2842ddae6a65b5d7158c8b5401f97a5: Status 404 returned error can't find the container with id 50527661e7967cd716069a1a17c27b92c2842ddae6a65b5d7158c8b5401f97a5
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.259599 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zbmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-hmpqd_openstack-operators(e81f775c-9ce2-415f-8bd3-ed49458ae893): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 07:04:52 crc kubenswrapper[4889]: W1128 07:04:52.259680 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a7dd634_e6b8_435c_963b_e482cc1d0cac.slice/crio-2cc2d84d664f0453d96f993375b7d659d53d28a2848ca311c5440590bac93ee3 WatchSource:0}: Error finding container 2cc2d84d664f0453d96f993375b7d659d53d28a2848ca311c5440590bac93ee3: Status 404 returned error can't find the container with id 2cc2d84d664f0453d96f993375b7d659d53d28a2848ca311c5440590bac93ee3
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.261437 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zbmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-hmpqd_openstack-operators(e81f775c-9ce2-415f-8bd3-ed49458ae893): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.262531 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]"
pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" podUID="e81f775c-9ce2-415f-8bd3-ed49458ae893" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.264161 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mlfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-hd79s_openstack-operators(1a7dd634-e6b8-435c-963b-e482cc1d0cac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.265904 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mlfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-hd79s_openstack-operators(1a7dd634-e6b8-435c-963b-e482cc1d0cac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.267188 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" podUID="1a7dd634-e6b8-435c-963b-e482cc1d0cac" Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.463114 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7"] Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.471925 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth"] Nov 28 07:04:52 crc kubenswrapper[4889]: W1128 07:04:52.479072 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf02b81e8_ad8a_445e_8ebd_156f05fdd9e7.slice/crio-edc2f5af10f2f089491c23f96b735cff0d3b4126b103d0a88a00b696bcd98c64 WatchSource:0}: Error finding container edc2f5af10f2f089491c23f96b735cff0d3b4126b103d0a88a00b696bcd98c64: Status 404 returned error can't find the container with id edc2f5af10f2f089491c23f96b735cff0d3b4126b103d0a88a00b696bcd98c64 Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.488040 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-762kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-tcbth_openstack-operators(f02b81e8-ad8a-445e-8ebd-156f05fdd9e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.490201 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-762kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-tcbth_openstack-operators(f02b81e8-ad8a-445e-8ebd-156f05fdd9e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.491365 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS 
exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" podUID="f02b81e8-ad8a-445e-8ebd-156f05fdd9e7" Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.508235 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr"] Nov 28 07:04:52 crc kubenswrapper[4889]: W1128 07:04:52.513840 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb531db0a_6f24_4a61_811a_d75de0f59e94.slice/crio-4ca96c5745c455072f99deeb00a2965d9739e43f6d80c45d8be014898835f45e WatchSource:0}: Error finding container 4ca96c5745c455072f99deeb00a2965d9739e43f6d80c45d8be014898835f45e: Status 404 returned error can't find the container with id 4ca96c5745c455072f99deeb00a2965d9739e43f6d80c45d8be014898835f45e Nov 28 07:04:52 crc kubenswrapper[4889]: W1128 07:04:52.514972 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod729b51c8_1b36_4716_8c7a_ae23ed249f03.slice/crio-fd0ab9a0d044f06935d1ecb9be4de0abed2f0bb31682611cbd82beb335583f1d WatchSource:0}: Error finding container fd0ab9a0d044f06935d1ecb9be4de0abed2f0bb31682611cbd82beb335583f1d: Status 404 returned error can't find the container with id fd0ab9a0d044f06935d1ecb9be4de0abed2f0bb31682611cbd82beb335583f1d Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.517828 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6gcxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-9swrr_openstack-operators(729b51c8-1b36-4716-8c7a-ae23ed249f03): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.519350 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv"] Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.521598 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6gcxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-9swrr_openstack-operators(729b51c8-1b36-4716-8c7a-ae23ed249f03): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.522228 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjb6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-kp8mv_openstack-operators(b531db0a-6f24-4a61-811a-d75de0f59e94): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.522847 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" podUID="729b51c8-1b36-4716-8c7a-ae23ed249f03" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.524134 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjb6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-kp8mv_openstack-operators(b531db0a-6f24-4a61-811a-d75de0f59e94): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.525415 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" podUID="b531db0a-6f24-4a61-811a-d75de0f59e94" Nov 28 07:04:52 crc kubenswrapper[4889]: I1128 07:04:52.822785 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.822961 4889 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:04:52 crc kubenswrapper[4889]: E1128 07:04:52.823060 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert podName:a9f38b46-2bae-4e2d-8b02-c314b9e8f77a nodeName:}" failed. No retries permitted until 2025-11-28 07:04:54.823039459 +0000 UTC m=+1017.793273624 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" (UID: "a9f38b46-2bae-4e2d-8b02-c314b9e8f77a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.024866 4889 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.024930 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:04:55.024916457 +0000 UTC m=+1017.995150612 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "webhook-server-cert" not found Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.024689 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.024985 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.025146 4889 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.025188 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:04:55.025178003 +0000 UTC m=+1017.995412148 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "metrics-server-cert" not found Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.134321 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns" event={"ID":"670339cb-0ec6-48bc-b892-c14ad66849c0","Type":"ContainerStarted","Data":"e16a17cc4ab8c23404c0d01bc754b5b7c3028546239420ab9f2a45fa6cc079d0"} Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.135368 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" event={"ID":"22d7b246-073e-4b87-81f8-04cb344e317c","Type":"ContainerStarted","Data":"eda46ac6e8c3c0b6507d0f86e34f434d83e5af7457556cd15a5133b390c79801"} Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.136909 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" event={"ID":"729b51c8-1b36-4716-8c7a-ae23ed249f03","Type":"ContainerStarted","Data":"fd0ab9a0d044f06935d1ecb9be4de0abed2f0bb31682611cbd82beb335583f1d"} Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.137557 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" podUID="22d7b246-073e-4b87-81f8-04cb344e317c" Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.138959 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7" event={"ID":"37909eac-261b-42f4-b85e-14fd8f00c42b","Type":"ContainerStarted","Data":"b78fa5ad62bad5cd20cec02dbb829cc70d252f0423e874faf6e30c77164e54f1"} Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.141361 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d" event={"ID":"50833719-605c-4e59-9535-7377eeb99994","Type":"ContainerStarted","Data":"67af1d772f5c5521a63cad228f68055f846db5e080393648eb51885e23b51ad9"} Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.143678 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" event={"ID":"f02b81e8-ad8a-445e-8ebd-156f05fdd9e7","Type":"ContainerStarted","Data":"edc2f5af10f2f089491c23f96b735cff0d3b4126b103d0a88a00b696bcd98c64"} Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.146864 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" event={"ID":"47734956-e3b4-4ca1-8f4b-490b2f861bf0","Type":"ContainerStarted","Data":"14ee19f0377f3c476a52bca739d6785b910c68d5755626f204fde098cb8a226f"} Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.146914 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" podUID="729b51c8-1b36-4716-8c7a-ae23ed249f03" Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.149350 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" event={"ID":"dcd06fe4-e876-4947-b5c6-812381c42b71","Type":"ContainerStarted","Data":"af5510633d1388532c95a138f2139d9137d6296d43931dba2f1b6782a18bdf67"} Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.149516 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" podUID="f02b81e8-ad8a-445e-8ebd-156f05fdd9e7" Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.161693 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\", 
failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" podUID="47734956-e3b4-4ca1-8f4b-490b2f861bf0" Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.170444 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" event={"ID":"b531db0a-6f24-4a61-811a-d75de0f59e94","Type":"ContainerStarted","Data":"4ca96c5745c455072f99deeb00a2965d9739e43f6d80c45d8be014898835f45e"} Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.173806 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" podUID="b531db0a-6f24-4a61-811a-d75de0f59e94" Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.173920 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" event={"ID":"1a7dd634-e6b8-435c-963b-e482cc1d0cac","Type":"ContainerStarted","Data":"2cc2d84d664f0453d96f993375b7d659d53d28a2848ca311c5440590bac93ee3"} Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.175891 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" podUID="1a7dd634-e6b8-435c-963b-e482cc1d0cac" Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.177623 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w" event={"ID":"8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d","Type":"ContainerStarted","Data":"015e788ef2146baf4d22a8b95e0e76b50689d4bc96f8a7cc8da4b1a709bbb289"} Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.184999 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk" event={"ID":"af524ba5-acaf-4f33-bb04-6c2818b1cdf5","Type":"ContainerStarted","Data":"c6c599bb56162b3585cc12e93a040eee92a5824c3c33b29674ef87869135f5a7"} Nov 28 07:04:53 crc kubenswrapper[4889]: I1128 07:04:53.194445 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" event={"ID":"e81f775c-9ce2-415f-8bd3-ed49458ae893","Type":"ContainerStarted","Data":"50527661e7967cd716069a1a17c27b92c2842ddae6a65b5d7158c8b5401f97a5"} Nov 28 07:04:53 crc kubenswrapper[4889]: E1128 07:04:53.210196 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" podUID="e81f775c-9ce2-415f-8bd3-ed49458ae893" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.206537 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" podUID="b531db0a-6f24-4a61-811a-d75de0f59e94" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.206789 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" podUID="47734956-e3b4-4ca1-8f4b-490b2f861bf0" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.206804 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" podUID="729b51c8-1b36-4716-8c7a-ae23ed249f03" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.206958 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" podUID="1a7dd634-e6b8-435c-963b-e482cc1d0cac" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.207106 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" podUID="22d7b246-073e-4b87-81f8-04cb344e317c" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.207188 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" podUID="f02b81e8-ad8a-445e-8ebd-156f05fdd9e7" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.211049 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" podUID="e81f775c-9ce2-415f-8bd3-ed49458ae893" Nov 28 07:04:54 crc kubenswrapper[4889]: I1128 07:04:54.261593 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.261842 4889 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.262052 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert podName:559d7ec2-8cd6-4c5c-a844-c7f3953ec021 nodeName:}" failed. No retries permitted until 2025-11-28 07:04:58.261915334 +0000 UTC m=+1021.232149479 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert") pod "infra-operator-controller-manager-57548d458d-lwdbd" (UID: "559d7ec2-8cd6-4c5c-a844-c7f3953ec021") : secret "infra-operator-webhook-server-cert" not found Nov 28 07:04:54 crc kubenswrapper[4889]: I1128 07:04:54.870554 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.871146 4889 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:04:54 crc kubenswrapper[4889]: E1128 07:04:54.871241 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert podName:a9f38b46-2bae-4e2d-8b02-c314b9e8f77a nodeName:}" failed. No retries permitted until 2025-11-28 07:04:58.87121791 +0000 UTC m=+1021.841452065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" (UID: "a9f38b46-2bae-4e2d-8b02-c314b9e8f77a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:04:55 crc kubenswrapper[4889]: I1128 07:04:55.072921 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:04:55 crc kubenswrapper[4889]: I1128 07:04:55.073005 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:04:55 crc kubenswrapper[4889]: E1128 07:04:55.073098 4889 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 07:04:55 crc kubenswrapper[4889]: E1128 07:04:55.073174 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:04:59.07315662 +0000 UTC m=+1022.043390775 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "webhook-server-cert" not found Nov 28 07:04:55 crc kubenswrapper[4889]: E1128 07:04:55.073218 4889 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:04:55 crc kubenswrapper[4889]: E1128 07:04:55.073287 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:04:59.073268423 +0000 UTC m=+1022.043502678 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "metrics-server-cert" not found Nov 28 07:04:58 crc kubenswrapper[4889]: I1128 07:04:58.319865 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:04:58 crc kubenswrapper[4889]: E1128 07:04:58.319987 4889 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 07:04:58 crc kubenswrapper[4889]: E1128 07:04:58.320359 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert podName:559d7ec2-8cd6-4c5c-a844-c7f3953ec021 nodeName:}" failed. No retries permitted until 2025-11-28 07:05:06.320339727 +0000 UTC m=+1029.290573882 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert") pod "infra-operator-controller-manager-57548d458d-lwdbd" (UID: "559d7ec2-8cd6-4c5c-a844-c7f3953ec021") : secret "infra-operator-webhook-server-cert" not found Nov 28 07:04:58 crc kubenswrapper[4889]: I1128 07:04:58.782445 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:04:58 crc kubenswrapper[4889]: I1128 07:04:58.782503 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:04:58 crc kubenswrapper[4889]: I1128 07:04:58.928504 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:04:58 crc kubenswrapper[4889]: E1128 07:04:58.928691 4889 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:04:58 crc kubenswrapper[4889]: E1128 07:04:58.928788 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert podName:a9f38b46-2bae-4e2d-8b02-c314b9e8f77a nodeName:}" failed. No retries permitted until 2025-11-28 07:05:06.928768072 +0000 UTC m=+1029.899002227 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" (UID: "a9f38b46-2bae-4e2d-8b02-c314b9e8f77a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:04:59 crc kubenswrapper[4889]: I1128 07:04:59.131368 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:04:59 crc kubenswrapper[4889]: I1128 07:04:59.131516 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:04:59 crc kubenswrapper[4889]: E1128 07:04:59.131542 4889 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:04:59 crc kubenswrapper[4889]: E1128 07:04:59.131600 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:05:07.131583083 +0000 UTC m=+1030.101817238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "metrics-server-cert" not found Nov 28 07:04:59 crc kubenswrapper[4889]: E1128 07:04:59.131608 4889 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 07:04:59 crc kubenswrapper[4889]: E1128 07:04:59.131643 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:05:07.131632704 +0000 UTC m=+1030.101866859 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "webhook-server-cert" not found Nov 28 07:05:04 crc kubenswrapper[4889]: E1128 07:05:04.914826 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jm7b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-955677c94-mxn8f_openstack-operators(b9d9668a-02e9-4d9f-856e-be23f0484ccf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:05:04 crc kubenswrapper[4889]: E1128 07:05:04.916530 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" podUID="b9d9668a-02e9-4d9f-856e-be23f0484ccf" Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.325163 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns" event={"ID":"670339cb-0ec6-48bc-b892-c14ad66849c0","Type":"ContainerStarted","Data":"512828c94c770448b31069097b3a5f5fcf1a27e6b7cba5dc355e26b78fd0295f"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.352313 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" event={"ID":"0ee115df-19fd-4ca6-a087-9f4a56a86378","Type":"ContainerStarted","Data":"e5cec164eb7b7d9754824cb18ea63fe16c9a92edcc4c3e992eddbba657af516a"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.371882 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w" event={"ID":"8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d","Type":"ContainerStarted","Data":"2f3815873dca6e2239d69bea32cfe448f17d2f774369bb9f07ad388a77753887"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.393515 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" event={"ID":"b9d9668a-02e9-4d9f-856e-be23f0484ccf","Type":"ContainerStarted","Data":"fb073097a1d29065b99dcbcb729378a56d6a1d73561233e9fc9b2d88577a5209"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.393667 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" Nov 28 07:05:05 crc kubenswrapper[4889]: E1128 07:05:05.400514 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" podUID="b9d9668a-02e9-4d9f-856e-be23f0484ccf" Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.418231 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7" event={"ID":"37909eac-261b-42f4-b85e-14fd8f00c42b","Type":"ContainerStarted","Data":"a7e84b14c46e872fd6fa5ee237ad4bf5915703081d6bdcb5e5cd97b94d6b4490"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.434906 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" event={"ID":"dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a","Type":"ContainerStarted","Data":"9898632ff4206a1134d621e96fe88c48db7c849137f2766c30ae60ca59282645"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.461105 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dwz7" podStartSLOduration=2.533003582 podStartE2EDuration="14.461084638s" podCreationTimestamp="2025-11-28 07:04:51 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.468919943 +0000 UTC m=+1015.439154088" lastFinishedPulling="2025-11-28 07:05:04.397000989 +0000 UTC m=+1027.367235144" observedRunningTime="2025-11-28 07:05:05.456636827 +0000 UTC m=+1028.426870982" watchObservedRunningTime="2025-11-28 07:05:05.461084638 +0000 UTC m=+1028.431318793" Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.464925 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk" event={"ID":"af524ba5-acaf-4f33-bb04-6c2818b1cdf5","Type":"ContainerStarted","Data":"b702910b7c2235d1c42dbcfbd762db028f6d432a2d915e10fa032f8df06a201b"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.466566 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d" event={"ID":"50833719-605c-4e59-9535-7377eeb99994","Type":"ContainerStarted","Data":"d3df849b81a3a2154a8b29b2aeb4d374f7dba7945fc00b2750b7d29da7d7eeb0"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.492944 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" event={"ID":"178814bc-902e-43d9-a606-c3640477a94d","Type":"ContainerStarted","Data":"d2d6fe8136d796d9d099bbfbdd394adf2f62c4b61d4676af82813b5be3d94159"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.532388 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" 
event={"ID":"dcd06fe4-e876-4947-b5c6-812381c42b71","Type":"ContainerStarted","Data":"1ae25bb3ef12f0319dc9a2acc7d5711491f5c613cea33b73e619eb9539c83c35"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.556482 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" event={"ID":"1fec2494-e72e-4019-a869-b3080018f75d","Type":"ContainerStarted","Data":"a366e90c20e85179d0c35607a6e74258f7fa5e4d2487ef274e74b69a6ef4cf71"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.567173 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" event={"ID":"d281dca0-e9e1-4e2d-befc-0508ae9421b9","Type":"ContainerStarted","Data":"6fa34e67476e615540705d47cd67c460a8bad73f803388822683c298a3984402"} Nov 28 07:05:05 crc kubenswrapper[4889]: I1128 07:05:05.568120 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" event={"ID":"363ed2cd-915f-4260-8eb7-950ff710b500","Type":"ContainerStarted","Data":"f84b805f86668bf88d9ac88575e9711686ed5ffa86db593bdc029ba73d74affb"} Nov 28 07:05:06 crc kubenswrapper[4889]: I1128 07:05:06.335954 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:05:06 crc kubenswrapper[4889]: I1128 07:05:06.357219 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/559d7ec2-8cd6-4c5c-a844-c7f3953ec021-cert\") pod \"infra-operator-controller-manager-57548d458d-lwdbd\" (UID: \"559d7ec2-8cd6-4c5c-a844-c7f3953ec021\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:05:06 crc kubenswrapper[4889]: I1128 07:05:06.527554 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9fbr5" Nov 28 07:05:06 crc kubenswrapper[4889]: I1128 07:05:06.536975 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:05:06 crc kubenswrapper[4889]: E1128 07:05:06.584136 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" podUID="b9d9668a-02e9-4d9f-856e-be23f0484ccf" Nov 28 07:05:06 crc kubenswrapper[4889]: I1128 07:05:06.942699 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:05:06 crc kubenswrapper[4889]: E1128 07:05:06.942881 4889 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:05:06 crc kubenswrapper[4889]: E1128 07:05:06.942963 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert podName:a9f38b46-2bae-4e2d-8b02-c314b9e8f77a nodeName:}" failed. No retries permitted until 2025-11-28 07:05:22.942939931 +0000 UTC m=+1045.913174136 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" (UID: "a9f38b46-2bae-4e2d-8b02-c314b9e8f77a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:05:07 crc kubenswrapper[4889]: I1128 07:05:07.145199 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:05:07 crc kubenswrapper[4889]: E1128 07:05:07.145491 4889 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 07:05:07 crc kubenswrapper[4889]: E1128 07:05:07.145701 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:05:23.14568306 +0000 UTC m=+1046.115917215 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "webhook-server-cert" not found Nov 28 07:05:07 crc kubenswrapper[4889]: I1128 07:05:07.145749 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:05:07 crc kubenswrapper[4889]: E1128 07:05:07.145900 4889 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:05:07 crc kubenswrapper[4889]: E1128 07:05:07.146377 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs podName:efc28083-2792-41ee-a835-5953afb3070d nodeName:}" failed. No retries permitted until 2025-11-28 07:05:23.146349327 +0000 UTC m=+1046.116583512 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-g24v8" (UID: "efc28083-2792-41ee-a835-5953afb3070d") : secret "metrics-server-cert" not found Nov 28 07:05:07 crc kubenswrapper[4889]: I1128 07:05:07.733766 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd"] Nov 28 07:05:08 crc kubenswrapper[4889]: W1128 07:05:08.341356 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559d7ec2_8cd6_4c5c_a844_c7f3953ec021.slice/crio-348728c9781390113cb25ef2c7f417d5ed8606858494abf98761deeb4120852a WatchSource:0}: Error finding container 348728c9781390113cb25ef2c7f417d5ed8606858494abf98761deeb4120852a: Status 404 returned error can't find the container with id 348728c9781390113cb25ef2c7f417d5ed8606858494abf98761deeb4120852a Nov 28 07:05:08 crc kubenswrapper[4889]: I1128 07:05:08.598850 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" event={"ID":"559d7ec2-8cd6-4c5c-a844-c7f3953ec021","Type":"ContainerStarted","Data":"348728c9781390113cb25ef2c7f417d5ed8606858494abf98761deeb4120852a"} Nov 28 07:05:10 crc kubenswrapper[4889]: I1128 07:05:10.586425 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.650239 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" event={"ID":"178814bc-902e-43d9-a606-c3640477a94d","Type":"ContainerStarted","Data":"d909b9b5f7e37c57d167a70fd1be49c5998e32b93a566f83829155cdd3911031"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.651070 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.654492 4889 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.655116 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" event={"ID":"dcd06fe4-e876-4947-b5c6-812381c42b71","Type":"ContainerStarted","Data":"a3a52df3e8064867e3679adcbf20d814a996768db80cffe7b40c398e91b514bd"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.655562 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.658280 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.658955 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w" event={"ID":"8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d","Type":"ContainerStarted","Data":"76d91ee249f57ff9b426dba20ad28f81b6a6f2239b27229940287e71b253e869"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.659237 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.663030 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.664673 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" event={"ID":"b9d9668a-02e9-4d9f-856e-be23f0484ccf","Type":"ContainerStarted","Data":"1c3c7cbc9bd4fc0764484fc976f02b49a2e39eb46a429e2d761d6cf9c801f860"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.669685 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-lbvbd" podStartSLOduration=3.143468308 podStartE2EDuration="21.669671253s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:51.928843885 +0000 UTC m=+1014.899078040" lastFinishedPulling="2025-11-28 07:05:10.45504683 +0000 UTC m=+1033.425280985" observedRunningTime="2025-11-28 07:05:11.666988366 +0000 UTC m=+1034.637222521" watchObservedRunningTime="2025-11-28 07:05:11.669671253 +0000 UTC m=+1034.639905408" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.682436 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" event={"ID":"dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a","Type":"ContainerStarted","Data":"83c9324c9cf9de326183d6130799e28a88feaf98c4b6d30e8a00289797be7df9"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.682816 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.687766 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.694112 4889 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" event={"ID":"1a7dd634-e6b8-435c-963b-e482cc1d0cac","Type":"ContainerStarted","Data":"1f9774b2877741b6145a86ef608dc8fb8c570923db114630da0d0924d409715d"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.694139 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" event={"ID":"1a7dd634-e6b8-435c-963b-e482cc1d0cac","Type":"ContainerStarted","Data":"c360a7bb21ba9cbfe2df48614d49ec29d2faf7637c30de3a87d567cbb2f13032"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.694763 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.699272 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-mxn8f" podStartSLOduration=9.292884336 podStartE2EDuration="21.699254797s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:51.928593729 +0000 UTC m=+1014.898827884" lastFinishedPulling="2025-11-28 07:05:04.33496419 +0000 UTC m=+1027.305198345" observedRunningTime="2025-11-28 07:05:11.698232271 +0000 UTC m=+1034.668466426" watchObservedRunningTime="2025-11-28 07:05:11.699254797 +0000 UTC m=+1034.669488952" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.710981 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" event={"ID":"22d7b246-073e-4b87-81f8-04cb344e317c","Type":"ContainerStarted","Data":"0957b907ee862e354e4b3edf864f8f9b00465b870eab08af4619872d0ba8cadb"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.711031 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" event={"ID":"22d7b246-073e-4b87-81f8-04cb344e317c","Type":"ContainerStarted","Data":"a9c54d0a60ee40a96e3adfd61e2015bccc58957ae4fdb5b9d877020137ee8870"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.711609 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.715052 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" event={"ID":"363ed2cd-915f-4260-8eb7-950ff710b500","Type":"ContainerStarted","Data":"34cbcd9918a30b08e346a8875b873cc07d6e018f8d9041ad591c70a628853bc5"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.716077 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.720502 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.722151 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" event={"ID":"d281dca0-e9e1-4e2d-befc-0508ae9421b9","Type":"ContainerStarted","Data":"c5a879d3d6a081523d42f649897dbbbb7b26353e9232e5f93cb5a7b8ae90cbb7"} Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.722720 4889 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.726674 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.748472 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-k5wp6" podStartSLOduration=3.007376651 podStartE2EDuration="21.748450017s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.119783032 +0000 UTC m=+1015.090017187" lastFinishedPulling="2025-11-28 07:05:10.860856398 +0000 UTC m=+1033.831090553" observedRunningTime="2025-11-28 07:05:11.737436104 +0000 UTC m=+1034.707670289" watchObservedRunningTime="2025-11-28 07:05:11.748450017 +0000 UTC m=+1034.718684192" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.784368 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-svz4w" podStartSLOduration=2.949568437 podStartE2EDuration="21.784346948s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.157754514 +0000 UTC m=+1015.127988669" lastFinishedPulling="2025-11-28 07:05:10.992533025 +0000 UTC m=+1033.962767180" observedRunningTime="2025-11-28 07:05:11.756869186 +0000 UTC m=+1034.727103331" watchObservedRunningTime="2025-11-28 07:05:11.784346948 +0000 UTC m=+1034.754581113" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.793980 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-kvt48" podStartSLOduration=3.217545435 podStartE2EDuration="21.793961376s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:51.928919597 +0000 UTC m=+1014.899153752" lastFinishedPulling="2025-11-28 07:05:10.505335538 +0000 UTC m=+1033.475569693" observedRunningTime="2025-11-28 07:05:11.782647255 +0000 UTC m=+1034.752881410" watchObservedRunningTime="2025-11-28 07:05:11.793961376 +0000 UTC m=+1034.764195531" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.833187 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-qcklj" podStartSLOduration=3.186509895 podStartE2EDuration="21.833164009s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.072510809 +0000 UTC m=+1015.042744964" lastFinishedPulling="2025-11-28 07:05:10.719164923 +0000 UTC m=+1033.689399078" observedRunningTime="2025-11-28 07:05:11.829399855 +0000 UTC m=+1034.799634010" watchObservedRunningTime="2025-11-28 07:05:11.833164009 +0000 UTC m=+1034.803398154" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.854869 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-dr6z8" podStartSLOduration=2.56031013 podStartE2EDuration="21.854852747s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:51.934849584 +0000 UTC m=+1014.905083739" lastFinishedPulling="2025-11-28 07:05:11.229392201 +0000 UTC m=+1034.199626356" observedRunningTime="2025-11-28 07:05:11.853176985 +0000 UTC 
m=+1034.823411130" watchObservedRunningTime="2025-11-28 07:05:11.854852747 +0000 UTC m=+1034.825086902" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.886400 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" podStartSLOduration=3.705936791 podStartE2EDuration="21.886381839s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.26402482 +0000 UTC m=+1015.234258975" lastFinishedPulling="2025-11-28 07:05:10.444469858 +0000 UTC m=+1033.414704023" observedRunningTime="2025-11-28 07:05:11.881901518 +0000 UTC m=+1034.852135673" watchObservedRunningTime="2025-11-28 07:05:11.886381839 +0000 UTC m=+1034.856615994" Nov 28 07:05:11 crc kubenswrapper[4889]: I1128 07:05:11.912800 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" podStartSLOduration=3.683980296 podStartE2EDuration="21.912781314s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.226224302 +0000 UTC m=+1015.196458457" lastFinishedPulling="2025-11-28 07:05:10.45502532 +0000 UTC m=+1033.425259475" observedRunningTime="2025-11-28 07:05:11.900475039 +0000 UTC m=+1034.870709194" watchObservedRunningTime="2025-11-28 07:05:11.912781314 +0000 UTC m=+1034.883015469" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.737942 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk" event={"ID":"af524ba5-acaf-4f33-bb04-6c2818b1cdf5","Type":"ContainerStarted","Data":"d53ad47c1e5a9824ac32e2010a53f8218c1e9a422cd804f5577adc26d7a2543d"} Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.738297 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.740049 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.742302 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d" event={"ID":"50833719-605c-4e59-9535-7377eeb99994","Type":"ContainerStarted","Data":"49eaa46dc740c87e5b6a5e1046de75fad1859fa88f6208d50444c4a1da6a30d6"} Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.742546 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.744259 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.744947 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" event={"ID":"1fec2494-e72e-4019-a869-b3080018f75d","Type":"ContainerStarted","Data":"b987588dc8fe8d1c965b1120bb979570183c6413832b026f79b88fa6a676941c"} Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.745475 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" Nov 28 07:05:12 
crc kubenswrapper[4889]: I1128 07:05:12.746982 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.752974 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns" event={"ID":"670339cb-0ec6-48bc-b892-c14ad66849c0","Type":"ContainerStarted","Data":"edc62447a110ea78dd69cfb810fa9270b305fe5dca18b308179ac815bef2a228"} Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.753148 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.754965 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.758759 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" event={"ID":"0ee115df-19fd-4ca6-a087-9f4a56a86378","Type":"ContainerStarted","Data":"94f7379366682a0ee3a0c99778d991e995fd6f97285083356e877734a8ad4f56"} Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.758820 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.760913 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.790199 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-dp5mk" podStartSLOduration=3.253224621 podStartE2EDuration="22.790183481s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.225947296 +0000 UTC m=+1015.196181461" lastFinishedPulling="2025-11-28 07:05:11.762906166 +0000 UTC m=+1034.733140321" observedRunningTime="2025-11-28 07:05:12.757112931 +0000 UTC m=+1035.727347086" watchObservedRunningTime="2025-11-28 07:05:12.790183481 +0000 UTC m=+1035.760417636" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.791350 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-mlwrn" podStartSLOduration=2.8485937630000002 podStartE2EDuration="22.7913409s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:51.89235989 +0000 UTC m=+1014.862594045" lastFinishedPulling="2025-11-28 07:05:11.835107027 +0000 UTC m=+1034.805341182" observedRunningTime="2025-11-28 07:05:12.781138617 +0000 UTC m=+1035.751372772" watchObservedRunningTime="2025-11-28 07:05:12.7913409 +0000 UTC m=+1035.761575055" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.810540 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-hds8d" podStartSLOduration=2.689335592 podStartE2EDuration="22.810518455s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.114172303 +0000 UTC m=+1015.084406458" lastFinishedPulling="2025-11-28 07:05:12.235355166 
+0000 UTC m=+1035.205589321" observedRunningTime="2025-11-28 07:05:12.800939718 +0000 UTC m=+1035.771173873" watchObservedRunningTime="2025-11-28 07:05:12.810518455 +0000 UTC m=+1035.780752610" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.822510 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-g6lns" podStartSLOduration=2.796820398 podStartE2EDuration="22.822473322s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.160046591 +0000 UTC m=+1015.130280756" lastFinishedPulling="2025-11-28 07:05:12.185699525 +0000 UTC m=+1035.155933680" observedRunningTime="2025-11-28 07:05:12.819313924 +0000 UTC m=+1035.789548079" watchObservedRunningTime="2025-11-28 07:05:12.822473322 +0000 UTC m=+1035.792707477" Nov 28 07:05:12 crc kubenswrapper[4889]: I1128 07:05:12.858367 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-rwzxg" podStartSLOduration=3.062816028 podStartE2EDuration="22.858343192s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:51.727206363 +0000 UTC m=+1014.697440518" lastFinishedPulling="2025-11-28 07:05:11.522733537 +0000 UTC m=+1034.492967682" observedRunningTime="2025-11-28 07:05:12.854317092 +0000 UTC m=+1035.824551237" watchObservedRunningTime="2025-11-28 07:05:12.858343192 +0000 UTC m=+1035.828577337" Nov 28 07:05:17 crc kubenswrapper[4889]: I1128 07:05:17.818639 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" event={"ID":"47734956-e3b4-4ca1-8f4b-490b2f861bf0","Type":"ContainerStarted","Data":"005b1bfbd8e79c612f0fdecef6615433fc863ca98b138281df757f9fbd60cd2f"} Nov 28 07:05:17 crc kubenswrapper[4889]: I1128 07:05:17.821976 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" event={"ID":"559d7ec2-8cd6-4c5c-a844-c7f3953ec021","Type":"ContainerStarted","Data":"35a5ee44bb6baf03c65fc87c9dbcf24992e38b7fa0a00c105fa62e8a2ee42dcd"} Nov 28 07:05:17 crc kubenswrapper[4889]: I1128 07:05:17.828057 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" event={"ID":"b531db0a-6f24-4a61-811a-d75de0f59e94","Type":"ContainerStarted","Data":"015549953d84d5c31a803066e02de9cba699bfb6cc7b54f8794a816aeb0eb422"} Nov 28 07:05:17 crc kubenswrapper[4889]: I1128 07:05:17.830833 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" event={"ID":"e81f775c-9ce2-415f-8bd3-ed49458ae893","Type":"ContainerStarted","Data":"4c2986b5096a19955cbe2fea98b9aa31c554da1e49deeef415e4c5840745ecba"} Nov 28 07:05:17 crc kubenswrapper[4889]: I1128 07:05:17.840652 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" event={"ID":"729b51c8-1b36-4716-8c7a-ae23ed249f03","Type":"ContainerStarted","Data":"0710fbe4d2558db5be7723faecc239a374f675f1799e95618537b83e4680c3ef"} Nov 28 07:05:17 crc kubenswrapper[4889]: I1128 07:05:17.849378 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" 
event={"ID":"f02b81e8-ad8a-445e-8ebd-156f05fdd9e7","Type":"ContainerStarted","Data":"00b328c0644e0c8680f198a125fa426e039b7b651cb4acdec299aa3a20203911"} Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.859800 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" event={"ID":"47734956-e3b4-4ca1-8f4b-490b2f861bf0","Type":"ContainerStarted","Data":"d5b3f3619ff828f123bac87ac23378245c8158c651ac587c5dacdd1b4de6bdf9"} Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.861135 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.862824 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" event={"ID":"559d7ec2-8cd6-4c5c-a844-c7f3953ec021","Type":"ContainerStarted","Data":"f8fec60a66a04d2e579cf04ac8a95dbe87e5f687b7c4e1594fe5a4d3c6f77c66"} Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.863287 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.865007 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" event={"ID":"b531db0a-6f24-4a61-811a-d75de0f59e94","Type":"ContainerStarted","Data":"9b7ef011072f0df925becc721afa18b91e77d442fc77e2735045850b85fd7f29"} Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.865356 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.870575 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" event={"ID":"e81f775c-9ce2-415f-8bd3-ed49458ae893","Type":"ContainerStarted","Data":"ab43fea44cb06b365c23ee68b77faa9a501073f026bf7116964ce14e341ac831"} Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.870759 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.872561 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" event={"ID":"729b51c8-1b36-4716-8c7a-ae23ed249f03","Type":"ContainerStarted","Data":"bdf5496d0af0bca3ad10c8fd2a757972d7b291aaa522d66f74516e04c13822d0"} Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.873407 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.875361 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" event={"ID":"f02b81e8-ad8a-445e-8ebd-156f05fdd9e7","Type":"ContainerStarted","Data":"fc82dd7fe6eae98dc9171e58c9030f804b23ade128f9db100fa3652dc0d484ce"} Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.875844 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 
07:05:18.882097 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" podStartSLOduration=3.815467479 podStartE2EDuration="28.882078421s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.249526891 +0000 UTC m=+1015.219761046" lastFinishedPulling="2025-11-28 07:05:17.316137833 +0000 UTC m=+1040.286371988" observedRunningTime="2025-11-28 07:05:18.87924595 +0000 UTC m=+1041.849480105" watchObservedRunningTime="2025-11-28 07:05:18.882078421 +0000 UTC m=+1041.852312586" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.898580 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" podStartSLOduration=3.8841948139999998 podStartE2EDuration="28.89856442s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.259472297 +0000 UTC m=+1015.229706452" lastFinishedPulling="2025-11-28 07:05:17.273841893 +0000 UTC m=+1040.244076058" observedRunningTime="2025-11-28 07:05:18.897413801 +0000 UTC m=+1041.867647946" watchObservedRunningTime="2025-11-28 07:05:18.89856442 +0000 UTC m=+1041.868798575" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.917369 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" podStartSLOduration=4.101466314 podStartE2EDuration="28.917351226s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.487911855 +0000 UTC m=+1015.458146010" lastFinishedPulling="2025-11-28 07:05:17.303796767 +0000 UTC m=+1040.274030922" observedRunningTime="2025-11-28 07:05:18.91511559 +0000 UTC m=+1041.885349755" watchObservedRunningTime="2025-11-28 07:05:18.917351226 +0000 UTC m=+1041.887585391" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.932120 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" podStartSLOduration=20.00361816 podStartE2EDuration="28.932102882s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:05:08.344865449 +0000 UTC m=+1031.315099604" lastFinishedPulling="2025-11-28 07:05:17.273350171 +0000 UTC m=+1040.243584326" observedRunningTime="2025-11-28 07:05:18.929743223 +0000 UTC m=+1041.899977378" watchObservedRunningTime="2025-11-28 07:05:18.932102882 +0000 UTC m=+1041.902337037" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.961582 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" podStartSLOduration=4.207874904 podStartE2EDuration="28.961564483s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.522146674 +0000 UTC m=+1015.492380829" lastFinishedPulling="2025-11-28 07:05:17.275836253 +0000 UTC m=+1040.246070408" observedRunningTime="2025-11-28 07:05:18.956064696 +0000 UTC m=+1041.926298851" watchObservedRunningTime="2025-11-28 07:05:18.961564483 +0000 UTC m=+1041.931798638" Nov 28 07:05:18 crc kubenswrapper[4889]: I1128 07:05:18.980698 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" podStartSLOduration=4.224748093 podStartE2EDuration="28.980671427s" 
podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:04:52.517698824 +0000 UTC m=+1015.487932979" lastFinishedPulling="2025-11-28 07:05:17.273622158 +0000 UTC m=+1040.243856313" observedRunningTime="2025-11-28 07:05:18.973809656 +0000 UTC m=+1041.944043821" watchObservedRunningTime="2025-11-28 07:05:18.980671427 +0000 UTC m=+1041.950905592" Nov 28 07:05:21 crc kubenswrapper[4889]: I1128 07:05:21.342865 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hd79s" Nov 28 07:05:21 crc kubenswrapper[4889]: I1128 07:05:21.495193 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-pzwp7" Nov 28 07:05:22 crc kubenswrapper[4889]: I1128 07:05:22.997297 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.004691 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9f38b46-2bae-4e2d-8b02-c314b9e8f77a-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc\" (UID: \"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.201781 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.202108 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.206375 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.206483 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efc28083-2792-41ee-a835-5953afb3070d-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-g24v8\" (UID: \"efc28083-2792-41ee-a835-5953afb3070d\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.274370 4889 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9dnsx" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.283051 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.505577 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wnsfp" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.514122 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.720045 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc"] Nov 28 07:05:23 crc kubenswrapper[4889]: W1128 07:05:23.723220 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9f38b46_2bae_4e2d_8b02_c314b9e8f77a.slice/crio-857dc499162725305d628c46925e6565b1e3f3be7b778043e848a791d07b50eb WatchSource:0}: Error finding container 857dc499162725305d628c46925e6565b1e3f3be7b778043e848a791d07b50eb: Status 404 returned error can't find the container with id 857dc499162725305d628c46925e6565b1e3f3be7b778043e848a791d07b50eb Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.725333 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.923872 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" event={"ID":"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a","Type":"ContainerStarted","Data":"857dc499162725305d628c46925e6565b1e3f3be7b778043e848a791d07b50eb"} Nov 28 07:05:23 crc kubenswrapper[4889]: I1128 07:05:23.959408 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8"] Nov 28 07:05:23 crc kubenswrapper[4889]: W1128 07:05:23.963942 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefc28083_2792_41ee_a835_5953afb3070d.slice/crio-951ea6758a9505e4f6a964c0f7de5717e6050b0ef51ee45fae97baa49cd32d95 WatchSource:0}: Error finding container 951ea6758a9505e4f6a964c0f7de5717e6050b0ef51ee45fae97baa49cd32d95: Status 404 returned error can't find the container with id 951ea6758a9505e4f6a964c0f7de5717e6050b0ef51ee45fae97baa49cd32d95 Nov 28 07:05:24 crc kubenswrapper[4889]: I1128 07:05:24.934680 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" event={"ID":"efc28083-2792-41ee-a835-5953afb3070d","Type":"ContainerStarted","Data":"951ea6758a9505e4f6a964c0f7de5717e6050b0ef51ee45fae97baa49cd32d95"} Nov 28 07:05:26 crc kubenswrapper[4889]: I1128 07:05:26.544422 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lwdbd" Nov 28 07:05:28 crc kubenswrapper[4889]: I1128 07:05:28.782541 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:05:28 crc kubenswrapper[4889]: I1128 07:05:28.782952 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:05:28 crc kubenswrapper[4889]: I1128 07:05:28.783000 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 07:05:28 crc kubenswrapper[4889]: I1128 07:05:28.783581 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bcf61faea8df3b4bedcdbe66375ffe429928fd4ff7747468313822736645149"} pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:05:28 crc kubenswrapper[4889]: I1128 07:05:28.783634 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" containerID="cri-o://8bcf61faea8df3b4bedcdbe66375ffe429928fd4ff7747468313822736645149" gracePeriod=600 Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.989550 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" event={"ID":"efc28083-2792-41ee-a835-5953afb3070d","Type":"ContainerStarted","Data":"bf57f6e605dd83899441ba3a3142121c9698cd407326310350a62fa6d91a67c9"} Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.990123 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.993680 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerID="8bcf61faea8df3b4bedcdbe66375ffe429928fd4ff7747468313822736645149" exitCode=0 Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.993730 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerDied","Data":"8bcf61faea8df3b4bedcdbe66375ffe429928fd4ff7747468313822736645149"} Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.993766 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"5b371f61ff4e58e3c8a1cc2889d70d7351a69170427032ddc9f014086d459fb3"} Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.993784 4889 scope.go:117] "RemoveContainer" containerID="7ebc63c9a59babecd1fd35c9530a11a72ee07b00bf300c1205eb3965dda30903" Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.997613 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" 
event={"ID":"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a","Type":"ContainerStarted","Data":"59e6502e0af859b69de6d91776117dcd0d88d05707f2bbe5118b59d87c57eabc"} Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.997646 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" event={"ID":"a9f38b46-2bae-4e2d-8b02-c314b9e8f77a","Type":"ContainerStarted","Data":"8981f7f517df644ec520a1ee9d1d6bac41caee290bd4b2f52ad201e3a3f0b4d0"} Nov 28 07:05:30 crc kubenswrapper[4889]: I1128 07:05:30.997751 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:05:31 crc kubenswrapper[4889]: I1128 07:05:31.047128 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" podStartSLOduration=40.047113676 podStartE2EDuration="40.047113676s" podCreationTimestamp="2025-11-28 07:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:05:31.020087656 +0000 UTC m=+1053.990321821" watchObservedRunningTime="2025-11-28 07:05:31.047113676 +0000 UTC m=+1054.017347831" Nov 28 07:05:31 crc kubenswrapper[4889]: I1128 07:05:31.049570 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" podStartSLOduration=34.248461103 podStartE2EDuration="41.049563807s" podCreationTimestamp="2025-11-28 07:04:50 +0000 UTC" firstStartedPulling="2025-11-28 07:05:23.725011717 +0000 UTC m=+1046.695245872" lastFinishedPulling="2025-11-28 07:05:30.526114421 +0000 UTC m=+1053.496348576" observedRunningTime="2025-11-28 07:05:31.041431825 +0000 UTC m=+1054.011665990" watchObservedRunningTime="2025-11-28 07:05:31.049563807 +0000 UTC m=+1054.019797952" Nov 28 07:05:31 crc kubenswrapper[4889]: I1128 07:05:31.372283 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-hmpqd" Nov 28 07:05:31 crc kubenswrapper[4889]: I1128 07:05:31.533060 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-j9vh5" Nov 28 07:05:31 crc kubenswrapper[4889]: I1128 07:05:31.646971 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-tcbth" Nov 28 07:05:31 crc kubenswrapper[4889]: I1128 07:05:31.666744 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-9swrr" Nov 28 07:05:31 crc kubenswrapper[4889]: I1128 07:05:31.678243 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kp8mv" Nov 28 07:05:43 crc kubenswrapper[4889]: I1128 07:05:43.292592 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc" Nov 28 07:05:43 crc kubenswrapper[4889]: I1128 07:05:43.519939 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-g24v8" Nov 28 07:06:00 
crc kubenswrapper[4889]: I1128 07:06:00.384651 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557f57d995-m28nf"] Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.386661 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.390212 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.390472 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.392595 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.392881 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-d4jtn" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.413249 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-m28nf"] Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.422123 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-bnzkt"] Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.423749 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.425429 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.432457 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-dns-svc\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.432562 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-config\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.432636 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b794b38d-10c2-431f-b605-dd0b2aee5029-config\") pod \"dnsmasq-dns-557f57d995-m28nf\" (UID: \"b794b38d-10c2-431f-b605-dd0b2aee5029\") " pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.432657 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zrqb\" (UniqueName: \"kubernetes.io/projected/9b673033-6071-46e1-b983-27f2b1118a05-kube-api-access-2zrqb\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.432688 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9ntq\" (UniqueName: \"kubernetes.io/projected/b794b38d-10c2-431f-b605-dd0b2aee5029-kube-api-access-w9ntq\") pod 
\"dnsmasq-dns-557f57d995-m28nf\" (UID: \"b794b38d-10c2-431f-b605-dd0b2aee5029\") " pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.449129 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-bnzkt"] Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.534356 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-dns-svc\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.534422 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-config\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.534470 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b794b38d-10c2-431f-b605-dd0b2aee5029-config\") pod \"dnsmasq-dns-557f57d995-m28nf\" (UID: \"b794b38d-10c2-431f-b605-dd0b2aee5029\") " pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.534492 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zrqb\" (UniqueName: \"kubernetes.io/projected/9b673033-6071-46e1-b983-27f2b1118a05-kube-api-access-2zrqb\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.534522 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9ntq\" (UniqueName: \"kubernetes.io/projected/b794b38d-10c2-431f-b605-dd0b2aee5029-kube-api-access-w9ntq\") pod \"dnsmasq-dns-557f57d995-m28nf\" (UID: \"b794b38d-10c2-431f-b605-dd0b2aee5029\") " pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.535698 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-dns-svc\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.536451 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-config\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.536542 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b794b38d-10c2-431f-b605-dd0b2aee5029-config\") pod \"dnsmasq-dns-557f57d995-m28nf\" (UID: \"b794b38d-10c2-431f-b605-dd0b2aee5029\") " pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.576246 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zrqb\" (UniqueName: 
\"kubernetes.io/projected/9b673033-6071-46e1-b983-27f2b1118a05-kube-api-access-2zrqb\") pod \"dnsmasq-dns-766fdc659c-bnzkt\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.590872 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9ntq\" (UniqueName: \"kubernetes.io/projected/b794b38d-10c2-431f-b605-dd0b2aee5029-kube-api-access-w9ntq\") pod \"dnsmasq-dns-557f57d995-m28nf\" (UID: \"b794b38d-10c2-431f-b605-dd0b2aee5029\") " pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.711108 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:00 crc kubenswrapper[4889]: I1128 07:06:00.774872 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:01 crc kubenswrapper[4889]: I1128 07:06:01.212407 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-m28nf"] Nov 28 07:06:01 crc kubenswrapper[4889]: I1128 07:06:01.235372 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f57d995-m28nf" event={"ID":"b794b38d-10c2-431f-b605-dd0b2aee5029","Type":"ContainerStarted","Data":"c47a114d39a950ce2e6ff8d0dd0ba27c57197490317fa6fe619059a824d092ea"} Nov 28 07:06:01 crc kubenswrapper[4889]: W1128 07:06:01.261616 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b673033_6071_46e1_b983_27f2b1118a05.slice/crio-0ac8b0efa4d8dab916e63a6e14b5964ab5af14fd6fccafdb2ad3f903db634995 WatchSource:0}: Error finding container 0ac8b0efa4d8dab916e63a6e14b5964ab5af14fd6fccafdb2ad3f903db634995: Status 404 returned error can't find the container with id 0ac8b0efa4d8dab916e63a6e14b5964ab5af14fd6fccafdb2ad3f903db634995 Nov 28 07:06:01 crc kubenswrapper[4889]: I1128 07:06:01.263252 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-bnzkt"] Nov 28 07:06:02 crc kubenswrapper[4889]: I1128 07:06:02.269824 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" event={"ID":"9b673033-6071-46e1-b983-27f2b1118a05","Type":"ContainerStarted","Data":"0ac8b0efa4d8dab916e63a6e14b5964ab5af14fd6fccafdb2ad3f903db634995"} Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.172785 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-m28nf"] Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.198580 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-trr68"] Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.199954 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.217321 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-trr68"] Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.292018 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwl7\" (UniqueName: \"kubernetes.io/projected/f0cd5dd3-a98a-433d-bcfa-f7276759f987-kube-api-access-ggwl7\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.292061 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-dns-svc\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.292138 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-config\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.393543 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-config\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.393629 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwl7\" (UniqueName: \"kubernetes.io/projected/f0cd5dd3-a98a-433d-bcfa-f7276759f987-kube-api-access-ggwl7\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.393652 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-dns-svc\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.394678 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-dns-svc\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.395014 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-config\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.426603 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwl7\" (UniqueName: 
\"kubernetes.io/projected/f0cd5dd3-a98a-433d-bcfa-f7276759f987-kube-api-access-ggwl7\") pod \"dnsmasq-dns-57dc4c6697-trr68\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.525196 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-bnzkt"] Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.550900 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.567579 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kthfj"] Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.576880 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.583828 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kthfj"] Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.602366 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-dns-svc\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.602459 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-config\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.602490 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6qfc\" (UniqueName: \"kubernetes.io/projected/75ff0b9c-a6fd-410f-b862-1b373f720e90-kube-api-access-s6qfc\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.703584 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-dns-svc\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.704754 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-dns-svc\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.704976 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-config\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.705500 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-config\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.705013 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6qfc\" (UniqueName: \"kubernetes.io/projected/75ff0b9c-a6fd-410f-b862-1b373f720e90-kube-api-access-s6qfc\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.730933 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6qfc\" (UniqueName: \"kubernetes.io/projected/75ff0b9c-a6fd-410f-b862-1b373f720e90-kube-api-access-s6qfc\") pod \"dnsmasq-dns-8446fd7c75-kthfj\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:03 crc kubenswrapper[4889]: I1128 07:06:03.908551 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.129477 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-trr68"] Nov 28 07:06:04 crc kubenswrapper[4889]: W1128 07:06:04.137984 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0cd5dd3_a98a_433d_bcfa_f7276759f987.slice/crio-74fcc2823104e11e03777e4108c1f25a899f88053c018d1cc59706f4de77b75f WatchSource:0}: Error finding container 74fcc2823104e11e03777e4108c1f25a899f88053c018d1cc59706f4de77b75f: Status 404 returned error can't find the container with id 74fcc2823104e11e03777e4108c1f25a899f88053c018d1cc59706f4de77b75f Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.294358 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc4c6697-trr68" event={"ID":"f0cd5dd3-a98a-433d-bcfa-f7276759f987","Type":"ContainerStarted","Data":"74fcc2823104e11e03777e4108c1f25a899f88053c018d1cc59706f4de77b75f"} Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.339296 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.340593 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.355377 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.355523 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.355686 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6b6gx" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.355832 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.355901 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.355992 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.362265 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.367138 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.381321 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kthfj"] Nov 28 07:06:04 crc kubenswrapper[4889]: W1128 07:06:04.397117 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75ff0b9c_a6fd_410f_b862_1b373f720e90.slice/crio-380a081e214c14f0f8a1327ee2d5fa7014b0b2a7d3c0509938a98c4a7035132b WatchSource:0}: Error finding container 380a081e214c14f0f8a1327ee2d5fa7014b0b2a7d3c0509938a98c4a7035132b: Status 404 returned error can't find the container with id 380a081e214c14f0f8a1327ee2d5fa7014b0b2a7d3c0509938a98c4a7035132b Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416242 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416295 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416321 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-server-conf\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416381 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-plugins-conf\") pod \"rabbitmq-server-0\" 
(UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416431 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdsnr\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-kube-api-access-jdsnr\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416458 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90d501b3-ad2c-4fb8-814d-411dc2a11f20-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416718 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416887 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90d501b3-ad2c-4fb8-814d-411dc2a11f20-pod-info\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416906 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.416964 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.417027 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.519212 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.519288 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdsnr\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-kube-api-access-jdsnr\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc 
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.519333 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90d501b3-ad2c-4fb8-814d-411dc2a11f20-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.519356 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.519433 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90d501b3-ad2c-4fb8-814d-411dc2a11f20-pod-info\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.519452 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.519948 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.520010 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.520034 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.520067 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.520143 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.520185 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-server-conf\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0"
\"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.520471 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.520903 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.521066 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.521246 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-server-conf\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.521283 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.528426 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90d501b3-ad2c-4fb8-814d-411dc2a11f20-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.528447 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.528529 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.540048 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdsnr\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-kube-api-access-jdsnr\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.545016 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/90d501b3-ad2c-4fb8-814d-411dc2a11f20-pod-info\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.556872 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.671872 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.695639 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.696907 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.698516 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dgm7g" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.698771 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.707188 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.707231 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.707965 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.708072 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.708108 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.718093 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.722782 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.722821 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.722869 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.722930 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.722957 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.722972 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.722987 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml62b\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-kube-api-access-ml62b\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.723015 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.723035 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.723088 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.723141 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825218 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825305 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825359 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825401 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825417 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825436 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml62b\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-kube-api-access-ml62b\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825454 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825480 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825499 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825518 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825533 4889 
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.826654 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.827585 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.828856 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.835097 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.835620 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.836093 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.836728 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.825543 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.846523 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml62b\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-kube-api-access-ml62b\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0"
pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.854497 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.879677 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:04 crc kubenswrapper[4889]: I1128 07:06:04.893159 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.076256 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.190344 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.313576 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" event={"ID":"75ff0b9c-a6fd-410f-b862-1b373f720e90","Type":"ContainerStarted","Data":"380a081e214c14f0f8a1327ee2d5fa7014b0b2a7d3c0509938a98c4a7035132b"} Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.317194 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90d501b3-ad2c-4fb8-814d-411dc2a11f20","Type":"ContainerStarted","Data":"40f605471f0a69da83e1e1c311fb5c96870e596936fe4dd2f45833417c3d801c"} Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.557589 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:06:05 crc kubenswrapper[4889]: W1128 07:06:05.592804 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b744978_786e_4ab0_8a5c_1e8e3f9a2809.slice/crio-a09623258db225ca42b69afb7d249e2b7bcbc3fd02bf396bea4cd9a6c00a7e4c WatchSource:0}: Error finding container a09623258db225ca42b69afb7d249e2b7bcbc3fd02bf396bea4cd9a6c00a7e4c: Status 404 returned error can't find the container with id a09623258db225ca42b69afb7d249e2b7bcbc3fd02bf396bea4cd9a6c00a7e4c Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.897281 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.898966 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.906222 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.908152 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.908316 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7khg5" Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.908427 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.910828 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 07:06:05 crc kubenswrapper[4889]: I1128 07:06:05.919970 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.055250 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxss8\" (UniqueName: \"kubernetes.io/projected/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kube-api-access-sxss8\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.055292 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.055323 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.055347 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.055381 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.055405 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-default\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.055431 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.055446 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kolla-config\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.156510 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxss8\" (UniqueName: \"kubernetes.io/projected/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kube-api-access-sxss8\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.156550 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.156578 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.156602 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.156638 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.156660 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-default\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.156737 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.156757 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.157817 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kolla-config\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.159465 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.159734 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.160324 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-default\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.160508 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.168793 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.176007 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.203592 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxss8\" (UniqueName: \"kubernetes.io/projected/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kube-api-access-sxss8\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.218695 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.237530 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 07:06:06 crc kubenswrapper[4889]: I1128 07:06:06.345038 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b744978-786e-4ab0-8a5c-1e8e3f9a2809","Type":"ContainerStarted","Data":"a09623258db225ca42b69afb7d249e2b7bcbc3fd02bf396bea4cd9a6c00a7e4c"} Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.239203 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.240485 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.242641 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.242895 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-w55w9" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.243080 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.250161 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.259540 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.381511 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.381558 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.381609 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.381626 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.381669 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " 
pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.381692 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.381762 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zsp\" (UniqueName: \"kubernetes.io/projected/ecf7fcae-8493-4333-96c4-d4692a144187-kube-api-access-p2zsp\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.381785 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.488670 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.488985 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.489033 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.489058 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.489106 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2zsp\" (UniqueName: \"kubernetes.io/projected/ecf7fcae-8493-4333-96c4-d4692a144187-kube-api-access-p2zsp\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.489130 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " 
pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.489173 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.489191 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.500917 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.504372 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.505442 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.505481 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.514843 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.522229 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.539493 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.549143 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2zsp\" (UniqueName: \"kubernetes.io/projected/ecf7fcae-8493-4333-96c4-d4692a144187-kube-api-access-p2zsp\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.586118 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.605824 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.607125 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.618829 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-v6mv5" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.619171 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.619343 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.619419 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.693160 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-kolla-config\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.693212 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.693247 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzx5x\" (UniqueName: \"kubernetes.io/projected/5276ecd4-549a-4a41-94be-6408535b2492-kube-api-access-tzx5x\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.693279 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-config-data\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.693326 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc 
kubenswrapper[4889]: I1128 07:06:07.794909 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.795048 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-kolla-config\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.795068 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.796456 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-kolla-config\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.796553 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzx5x\" (UniqueName: \"kubernetes.io/projected/5276ecd4-549a-4a41-94be-6408535b2492-kube-api-access-tzx5x\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.796606 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-config-data\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.797663 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-config-data\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.798680 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.798865 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.820412 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzx5x\" (UniqueName: \"kubernetes.io/projected/5276ecd4-549a-4a41-94be-6408535b2492-kube-api-access-tzx5x\") pod \"memcached-0\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " pod="openstack/memcached-0" Nov 28 07:06:07 crc 
kubenswrapper[4889]: I1128 07:06:07.875665 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:07 crc kubenswrapper[4889]: I1128 07:06:07.950470 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 07:06:09 crc kubenswrapper[4889]: I1128 07:06:09.368357 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:06:09 crc kubenswrapper[4889]: I1128 07:06:09.369508 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:06:09 crc kubenswrapper[4889]: I1128 07:06:09.371622 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nhw9k" Nov 28 07:06:09 crc kubenswrapper[4889]: I1128 07:06:09.380358 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:06:09 crc kubenswrapper[4889]: I1128 07:06:09.421337 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scglv\" (UniqueName: \"kubernetes.io/projected/9b3de373-d67f-4cc7-ac6b-43b4b3f94242-kube-api-access-scglv\") pod \"kube-state-metrics-0\" (UID: \"9b3de373-d67f-4cc7-ac6b-43b4b3f94242\") " pod="openstack/kube-state-metrics-0" Nov 28 07:06:09 crc kubenswrapper[4889]: I1128 07:06:09.522567 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scglv\" (UniqueName: \"kubernetes.io/projected/9b3de373-d67f-4cc7-ac6b-43b4b3f94242-kube-api-access-scglv\") pod \"kube-state-metrics-0\" (UID: \"9b3de373-d67f-4cc7-ac6b-43b4b3f94242\") " pod="openstack/kube-state-metrics-0" Nov 28 07:06:09 crc kubenswrapper[4889]: I1128 07:06:09.552906 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scglv\" (UniqueName: \"kubernetes.io/projected/9b3de373-d67f-4cc7-ac6b-43b4b3f94242-kube-api-access-scglv\") pod \"kube-state-metrics-0\" (UID: \"9b3de373-d67f-4cc7-ac6b-43b4b3f94242\") " pod="openstack/kube-state-metrics-0" Nov 28 07:06:09 crc kubenswrapper[4889]: I1128 07:06:09.742965 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.259025 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.267993 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.276041 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.276661 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gq287" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.276808 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.276813 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.276833 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.278016 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.386984 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.387063 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx55z\" (UniqueName: \"kubernetes.io/projected/a92a932b-ef66-408c-883e-99412a94d0da-kube-api-access-dx55z\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.387135 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.387166 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.387225 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.387258 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-config\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.387315 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/a92a932b-ef66-408c-883e-99412a94d0da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.387339 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.489295 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.489336 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx55z\" (UniqueName: \"kubernetes.io/projected/a92a932b-ef66-408c-883e-99412a94d0da-kube-api-access-dx55z\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.489366 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.489390 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.489419 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.489442 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-config\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.489466 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a92a932b-ef66-408c-883e-99412a94d0da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.489482 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 
07:06:13.489701 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.490313 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a92a932b-ef66-408c-883e-99412a94d0da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.490826 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-config\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.491177 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.504098 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.504132 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.504592 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.508110 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx55z\" (UniqueName: \"kubernetes.io/projected/a92a932b-ef66-408c-883e-99412a94d0da-kube-api-access-dx55z\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.522850 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:13 crc kubenswrapper[4889]: I1128 07:06:13.594110 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.145923 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dlfmr"] Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.147352 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.150203 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.150753 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-znpgr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.155175 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.210447 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dlfmr"] Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.220180 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-log-ovn\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.220363 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-ovn-controller-tls-certs\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.220438 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run-ovn\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.220463 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2x7\" (UniqueName: \"kubernetes.io/projected/723ca26e-f925-47cc-92e3-998ff36f3e92-kube-api-access-8z2x7\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.220492 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.220551 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-combined-ca-bundle\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.220582 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/723ca26e-f925-47cc-92e3-998ff36f3e92-scripts\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.241441 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-d2mhk"] Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.249609 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-d2mhk"] Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.249738 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.321888 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5scv\" (UniqueName: \"kubernetes.io/projected/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-kube-api-access-k5scv\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.321938 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-run\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.321963 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-log\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322098 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-ovn-controller-tls-certs\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322157 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run-ovn\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322177 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2x7\" (UniqueName: \"kubernetes.io/projected/723ca26e-f925-47cc-92e3-998ff36f3e92-kube-api-access-8z2x7\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322198 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-etc-ovs\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 
07:06:15.322369 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-lib\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322420 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322458 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-combined-ca-bundle\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322478 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/723ca26e-f925-47cc-92e3-998ff36f3e92-scripts\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322636 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-scripts\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322668 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-log-ovn\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322795 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run-ovn\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322917 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.322985 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-log-ovn\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.325339 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-ovn-controller-tls-certs\") pod \"ovn-controller-dlfmr\" 
(UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.325628 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-combined-ca-bundle\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.335178 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/723ca26e-f925-47cc-92e3-998ff36f3e92-scripts\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.340004 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2x7\" (UniqueName: \"kubernetes.io/projected/723ca26e-f925-47cc-92e3-998ff36f3e92-kube-api-access-8z2x7\") pod \"ovn-controller-dlfmr\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.423868 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-lib\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.423972 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-scripts\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.423996 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5scv\" (UniqueName: \"kubernetes.io/projected/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-kube-api-access-k5scv\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.424015 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-run\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.424032 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-log\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.424059 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-etc-ovs\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.425043 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-run\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.425138 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-log\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.425255 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-etc-ovs\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.425636 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-lib\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.426566 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-scripts\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.441998 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5scv\" (UniqueName: \"kubernetes.io/projected/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-kube-api-access-k5scv\") pod \"ovn-controller-ovs-d2mhk\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.516278 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:15 crc kubenswrapper[4889]: I1128 07:06:15.577438 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.159192 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.161693 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.164021 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.164358 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zxm26" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.164424 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.164829 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.183038 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.236979 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.237034 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcksq\" (UniqueName: \"kubernetes.io/projected/7c960973-a307-4a8a-9fe6-885450c512e0-kube-api-access-kcksq\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.237120 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.237146 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.237176 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.237232 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.237258 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " 
pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.237288 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.338614 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.338658 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.338682 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.338698 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.338756 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.338782 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.338862 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.338886 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcksq\" (UniqueName: \"kubernetes.io/projected/7c960973-a307-4a8a-9fe6-885450c512e0-kube-api-access-kcksq\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.339529 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.339883 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.345908 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.348399 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.348789 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.353633 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.355388 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcksq\" (UniqueName: \"kubernetes.io/projected/7c960973-a307-4a8a-9fe6-885450c512e0-kube-api-access-kcksq\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.361091 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.364102 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:16 crc kubenswrapper[4889]: I1128 07:06:16.486328 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.027954 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.028538 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdsnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(90d501b3-ad2c-4fb8-814d-411dc2a11f20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.029787 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.539829 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46\\\"\"" pod="openstack/rabbitmq-server-0" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.674296 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.674746 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6qfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8446fd7c75-kthfj_openstack(75ff0b9c-a6fd-410f-b862-1b373f720e90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.675947 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" podUID="75ff0b9c-a6fd-410f-b862-1b373f720e90" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.694969 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.695114 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9ntq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-557f57d995-m28nf_openstack(b794b38d-10c2-431f-b605-dd0b2aee5029): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.696303 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-557f57d995-m28nf" podUID="b794b38d-10c2-431f-b605-dd0b2aee5029" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.722922 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627" Nov 28 07:06:29 
crc kubenswrapper[4889]: E1128 07:06:29.723254 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggwl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57dc4c6697-trr68_openstack(f0cd5dd3-a98a-433d-bcfa-f7276759f987): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.724753 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57dc4c6697-trr68" podUID="f0cd5dd3-a98a-433d-bcfa-f7276759f987" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.731994 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.732139 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts 
--keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zrqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-766fdc659c-bnzkt_openstack(9b673033-6071-46e1-b983-27f2b1118a05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:06:29 crc kubenswrapper[4889]: E1128 07:06:29.733303 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" podUID="9b673033-6071-46e1-b983-27f2b1118a05" Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.035072 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:06:30 crc kubenswrapper[4889]: W1128 07:06:30.047317 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecf7fcae_8493_4333_96c4_d4692a144187.slice/crio-f9d98a4cd9c523561c1569cff3e3e23fb5469b8845e4f479b87c6cfd2df72bd9 WatchSource:0}: Error finding container f9d98a4cd9c523561c1569cff3e3e23fb5469b8845e4f479b87c6cfd2df72bd9: Status 404 returned error can't find the container with id f9d98a4cd9c523561c1569cff3e3e23fb5469b8845e4f479b87c6cfd2df72bd9 Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.362940 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dlfmr"] Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.371249 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.399376 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/memcached-0"] Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.416111 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:06:30 crc kubenswrapper[4889]: W1128 07:06:30.460556 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92a932b_ef66_408c_883e_99412a94d0da.slice/crio-296f76de673da2298bf4e685ff9c9657b82484245813d137ff9d0927ade544c1 WatchSource:0}: Error finding container 296f76de673da2298bf4e685ff9c9657b82484245813d137ff9d0927ade544c1: Status 404 returned error can't find the container with id 296f76de673da2298bf4e685ff9c9657b82484245813d137ff9d0927ade544c1 Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.461094 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.544557 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dlfmr" event={"ID":"723ca26e-f925-47cc-92e3-998ff36f3e92","Type":"ContainerStarted","Data":"2dfd04615046dca43385fe76342cc99d482d19fabfb4717506853ac27d584148"} Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.546384 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9b3de373-d67f-4cc7-ac6b-43b4b3f94242","Type":"ContainerStarted","Data":"e04d5bc5f0442635252a97bc3e08c45cd5a6a202b5ac5032984894eb9d8a279d"} Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.547425 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5276ecd4-549a-4a41-94be-6408535b2492","Type":"ContainerStarted","Data":"1798b04903f2bd8cb51a6b9f815fc819b7eb46e53c63efd94c60f6f0c76ccf4c"} Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.548658 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ecf7fcae-8493-4333-96c4-d4692a144187","Type":"ContainerStarted","Data":"f9d98a4cd9c523561c1569cff3e3e23fb5469b8845e4f479b87c6cfd2df72bd9"} Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.550649 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b4be180d-c2ba-47ad-964d-18e7b1c12b2b","Type":"ContainerStarted","Data":"36378c077ac6da636e5fca8eba4ba7a3b05d4dda956e4f47e0a16f7cf47f60c7"} Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.553393 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a92a932b-ef66-408c-883e-99412a94d0da","Type":"ContainerStarted","Data":"296f76de673da2298bf4e685ff9c9657b82484245813d137ff9d0927ade544c1"} Nov 28 07:06:30 crc kubenswrapper[4889]: E1128 07:06:30.556809 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627\\\"\"" pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" podUID="75ff0b9c-a6fd-410f-b862-1b373f720e90" Nov 28 07:06:30 crc kubenswrapper[4889]: E1128 07:06:30.557074 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627\\\"\"" 
pod="openstack/dnsmasq-dns-57dc4c6697-trr68" podUID="f0cd5dd3-a98a-433d-bcfa-f7276759f987" Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.669614 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.766094 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-d2mhk"] Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.950197 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:30 crc kubenswrapper[4889]: I1128 07:06:30.955699 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.036277 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-config\") pod \"9b673033-6071-46e1-b983-27f2b1118a05\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.036402 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zrqb\" (UniqueName: \"kubernetes.io/projected/9b673033-6071-46e1-b983-27f2b1118a05-kube-api-access-2zrqb\") pod \"9b673033-6071-46e1-b983-27f2b1118a05\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.036463 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-dns-svc\") pod \"9b673033-6071-46e1-b983-27f2b1118a05\" (UID: \"9b673033-6071-46e1-b983-27f2b1118a05\") " Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.036521 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9ntq\" (UniqueName: \"kubernetes.io/projected/b794b38d-10c2-431f-b605-dd0b2aee5029-kube-api-access-w9ntq\") pod \"b794b38d-10c2-431f-b605-dd0b2aee5029\" (UID: \"b794b38d-10c2-431f-b605-dd0b2aee5029\") " Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.036557 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b794b38d-10c2-431f-b605-dd0b2aee5029-config\") pod \"b794b38d-10c2-431f-b605-dd0b2aee5029\" (UID: \"b794b38d-10c2-431f-b605-dd0b2aee5029\") " Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.036799 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-config" (OuterVolumeSpecName: "config") pod "9b673033-6071-46e1-b983-27f2b1118a05" (UID: "9b673033-6071-46e1-b983-27f2b1118a05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.037099 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b673033-6071-46e1-b983-27f2b1118a05" (UID: "9b673033-6071-46e1-b983-27f2b1118a05"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.037496 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.037512 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b673033-6071-46e1-b983-27f2b1118a05-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.039171 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b794b38d-10c2-431f-b605-dd0b2aee5029-config" (OuterVolumeSpecName: "config") pod "b794b38d-10c2-431f-b605-dd0b2aee5029" (UID: "b794b38d-10c2-431f-b605-dd0b2aee5029"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.042200 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b673033-6071-46e1-b983-27f2b1118a05-kube-api-access-2zrqb" (OuterVolumeSpecName: "kube-api-access-2zrqb") pod "9b673033-6071-46e1-b983-27f2b1118a05" (UID: "9b673033-6071-46e1-b983-27f2b1118a05"). InnerVolumeSpecName "kube-api-access-2zrqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.042345 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b794b38d-10c2-431f-b605-dd0b2aee5029-kube-api-access-w9ntq" (OuterVolumeSpecName: "kube-api-access-w9ntq") pod "b794b38d-10c2-431f-b605-dd0b2aee5029" (UID: "b794b38d-10c2-431f-b605-dd0b2aee5029"). InnerVolumeSpecName "kube-api-access-w9ntq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.139353 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zrqb\" (UniqueName: \"kubernetes.io/projected/9b673033-6071-46e1-b983-27f2b1118a05-kube-api-access-2zrqb\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.139395 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9ntq\" (UniqueName: \"kubernetes.io/projected/b794b38d-10c2-431f-b605-dd0b2aee5029-kube-api-access-w9ntq\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.139408 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b794b38d-10c2-431f-b605-dd0b2aee5029-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.565264 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.565285 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766fdc659c-bnzkt" event={"ID":"9b673033-6071-46e1-b983-27f2b1118a05","Type":"ContainerDied","Data":"0ac8b0efa4d8dab916e63a6e14b5964ab5af14fd6fccafdb2ad3f903db634995"} Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.568678 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c960973-a307-4a8a-9fe6-885450c512e0","Type":"ContainerStarted","Data":"8f30b580ebf20094bee3cbfc1ade77b8f39a86a7ae209e023090f5b26c44dd50"} Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.571557 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b744978-786e-4ab0-8a5c-1e8e3f9a2809","Type":"ContainerStarted","Data":"278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60"} Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.574170 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f57d995-m28nf" event={"ID":"b794b38d-10c2-431f-b605-dd0b2aee5029","Type":"ContainerDied","Data":"c47a114d39a950ce2e6ff8d0dd0ba27c57197490317fa6fe619059a824d092ea"} Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.574226 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f57d995-m28nf" Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.575901 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2mhk" event={"ID":"d69857d8-b0ca-49bd-9d89-3ad02ec7adea","Type":"ContainerStarted","Data":"1f976e7556fe2f100def71ad7f12a71c2d6fa26ca2096d5c5c38e59e17b7c44c"} Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.609172 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-bnzkt"] Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.618011 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-bnzkt"] Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.633450 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-m28nf"] Nov 28 07:06:31 crc kubenswrapper[4889]: I1128 07:06:31.639200 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-m28nf"] Nov 28 07:06:33 crc kubenswrapper[4889]: I1128 07:06:33.345086 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b673033-6071-46e1-b983-27f2b1118a05" path="/var/lib/kubelet/pods/9b673033-6071-46e1-b983-27f2b1118a05/volumes" Nov 28 07:06:33 crc kubenswrapper[4889]: I1128 07:06:33.346626 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b794b38d-10c2-431f-b605-dd0b2aee5029" path="/var/lib/kubelet/pods/b794b38d-10c2-431f-b605-dd0b2aee5029/volumes" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.805038 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xg58q"] Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.806368 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.808340 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.814329 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xg58q"] Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.924135 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovs-rundir\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.924190 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-combined-ca-bundle\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.924229 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovn-rundir\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.924253 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5deb3d-df4a-48e4-844b-35247485825a-config\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.924298 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkkcz\" (UniqueName: \"kubernetes.io/projected/fd5deb3d-df4a-48e4-844b-35247485825a-kube-api-access-kkkcz\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.924315 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.971295 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-trr68"] Nov 28 07:06:35 crc kubenswrapper[4889]: I1128 07:06:35.995947 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b7ccdcb4f-jzrl6"] Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.001158 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.004188 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.026493 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovs-rundir\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.027070 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovs-rundir\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.028808 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-combined-ca-bundle\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.029892 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovn-rundir\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.029985 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5deb3d-df4a-48e4-844b-35247485825a-config\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.030204 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovn-rundir\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.033874 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkkcz\" (UniqueName: \"kubernetes.io/projected/fd5deb3d-df4a-48e4-844b-35247485825a-kube-api-access-kkkcz\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.036964 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-combined-ca-bundle\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.038351 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5deb3d-df4a-48e4-844b-35247485825a-config\") pod 
\"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.040157 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b7ccdcb4f-jzrl6"] Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.051409 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkkcz\" (UniqueName: \"kubernetes.io/projected/fd5deb3d-df4a-48e4-844b-35247485825a-kube-api-access-kkkcz\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.055043 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.066963 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xg58q\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.138100 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.156630 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-ovsdbserver-sb\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.156774 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqx8f\" (UniqueName: \"kubernetes.io/projected/a5fafd05-b706-4241-a945-78c9b14aa439-kube-api-access-rqx8f\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.156826 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-config\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.156903 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-dns-svc\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.168897 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kthfj"] Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.229860 4889 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5bd7c66845-ljrtx"] Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.231260 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.234082 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.240273 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bd7c66845-ljrtx"] Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.260564 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqx8f\" (UniqueName: \"kubernetes.io/projected/a5fafd05-b706-4241-a945-78c9b14aa439-kube-api-access-rqx8f\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.260604 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-config\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.260726 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-dns-svc\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.260804 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-ovsdbserver-sb\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.261968 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-config\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.262302 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-ovsdbserver-sb\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.262464 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-dns-svc\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.306933 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqx8f\" (UniqueName: \"kubernetes.io/projected/a5fafd05-b706-4241-a945-78c9b14aa439-kube-api-access-rqx8f\") pod \"dnsmasq-dns-b7ccdcb4f-jzrl6\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " 
pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.362638 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbwgf\" (UniqueName: \"kubernetes.io/projected/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-kube-api-access-vbwgf\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.362745 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-sb\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.362771 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-config\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.362816 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-dns-svc\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.362845 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.427477 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.470550 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbwgf\" (UniqueName: \"kubernetes.io/projected/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-kube-api-access-vbwgf\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.470641 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-sb\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.470672 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-config\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.470722 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-dns-svc\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.470755 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.471692 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.474078 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-sb\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.475835 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-config\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.476581 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-dns-svc\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.507871 
4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbwgf\" (UniqueName: \"kubernetes.io/projected/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-kube-api-access-vbwgf\") pod \"dnsmasq-dns-5bd7c66845-ljrtx\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.563895 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.982831 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:36 crc kubenswrapper[4889]: I1128 07:06:36.990958 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.081086 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-dns-svc\") pod \"75ff0b9c-a6fd-410f-b862-1b373f720e90\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.081244 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-config\") pod \"75ff0b9c-a6fd-410f-b862-1b373f720e90\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.081579 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6qfc\" (UniqueName: \"kubernetes.io/projected/75ff0b9c-a6fd-410f-b862-1b373f720e90-kube-api-access-s6qfc\") pod \"75ff0b9c-a6fd-410f-b862-1b373f720e90\" (UID: \"75ff0b9c-a6fd-410f-b862-1b373f720e90\") " Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.081637 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-dns-svc\") pod \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.081675 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-config\") pod \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.082253 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggwl7\" (UniqueName: \"kubernetes.io/projected/f0cd5dd3-a98a-433d-bcfa-f7276759f987-kube-api-access-ggwl7\") pod \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\" (UID: \"f0cd5dd3-a98a-433d-bcfa-f7276759f987\") " Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.082490 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-config" (OuterVolumeSpecName: "config") pod "75ff0b9c-a6fd-410f-b862-1b373f720e90" (UID: "75ff0b9c-a6fd-410f-b862-1b373f720e90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.082621 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75ff0b9c-a6fd-410f-b862-1b373f720e90" (UID: "75ff0b9c-a6fd-410f-b862-1b373f720e90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.083082 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-config" (OuterVolumeSpecName: "config") pod "f0cd5dd3-a98a-433d-bcfa-f7276759f987" (UID: "f0cd5dd3-a98a-433d-bcfa-f7276759f987"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.083820 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0cd5dd3-a98a-433d-bcfa-f7276759f987" (UID: "f0cd5dd3-a98a-433d-bcfa-f7276759f987"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.083515 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.084035 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff0b9c-a6fd-410f-b862-1b373f720e90-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.085435 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ff0b9c-a6fd-410f-b862-1b373f720e90-kube-api-access-s6qfc" (OuterVolumeSpecName: "kube-api-access-s6qfc") pod "75ff0b9c-a6fd-410f-b862-1b373f720e90" (UID: "75ff0b9c-a6fd-410f-b862-1b373f720e90"). InnerVolumeSpecName "kube-api-access-s6qfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.085817 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0cd5dd3-a98a-433d-bcfa-f7276759f987-kube-api-access-ggwl7" (OuterVolumeSpecName: "kube-api-access-ggwl7") pod "f0cd5dd3-a98a-433d-bcfa-f7276759f987" (UID: "f0cd5dd3-a98a-433d-bcfa-f7276759f987"). InnerVolumeSpecName "kube-api-access-ggwl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.186217 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6qfc\" (UniqueName: \"kubernetes.io/projected/75ff0b9c-a6fd-410f-b862-1b373f720e90-kube-api-access-s6qfc\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.186248 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.186261 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0cd5dd3-a98a-433d-bcfa-f7276759f987-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.186272 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggwl7\" (UniqueName: \"kubernetes.io/projected/f0cd5dd3-a98a-433d-bcfa-f7276759f987-kube-api-access-ggwl7\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.635165 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc4c6697-trr68" event={"ID":"f0cd5dd3-a98a-433d-bcfa-f7276759f987","Type":"ContainerDied","Data":"74fcc2823104e11e03777e4108c1f25a899f88053c018d1cc59706f4de77b75f"} Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.635369 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc4c6697-trr68" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.637087 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" event={"ID":"75ff0b9c-a6fd-410f-b862-1b373f720e90","Type":"ContainerDied","Data":"380a081e214c14f0f8a1327ee2d5fa7014b0b2a7d3c0509938a98c4a7035132b"} Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.637203 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8446fd7c75-kthfj" Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.646480 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xg58q"] Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.717989 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kthfj"] Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.724856 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kthfj"] Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.741013 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-trr68"] Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.747825 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-trr68"] Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.753538 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bd7c66845-ljrtx"] Nov 28 07:06:37 crc kubenswrapper[4889]: W1128 07:06:37.775293 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd5deb3d_df4a_48e4_844b_35247485825a.slice/crio-c21c7f926ac799226c679abd9245c34e020745016ac8880f28de5e36bdb149c6 WatchSource:0}: Error finding container c21c7f926ac799226c679abd9245c34e020745016ac8880f28de5e36bdb149c6: Status 404 returned error can't find the container with id c21c7f926ac799226c679abd9245c34e020745016ac8880f28de5e36bdb149c6 Nov 28 07:06:37 crc kubenswrapper[4889]: I1128 07:06:37.863239 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b7ccdcb4f-jzrl6"] Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.644896 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b4be180d-c2ba-47ad-964d-18e7b1c12b2b","Type":"ContainerStarted","Data":"01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.646476 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a92a932b-ef66-408c-883e-99412a94d0da","Type":"ContainerStarted","Data":"8b167955e43f6529720269cc5280735e5f9b8f62a031ef0fc8db7679214765f7"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.647553 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9b3de373-d67f-4cc7-ac6b-43b4b3f94242","Type":"ContainerStarted","Data":"710a20a8d6ed3e17de97850bc314e869f452085a8f28180ad7b708972e3860d5"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.647677 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.648750 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c960973-a307-4a8a-9fe6-885450c512e0","Type":"ContainerStarted","Data":"9c0bdc3b1d5da3bad6cec36b156ddd5f2770493a35c89a25fb3741002c171edc"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.649768 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" event={"ID":"cdb0efae-f6ec-4212-b26a-5185a1d09c2f","Type":"ContainerStarted","Data":"fa0044a84fcbad3c4537bc8f4a8dbae0ffe362bbb6a24e13c6ad1023fb7828a5"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.652715 4889 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5276ecd4-549a-4a41-94be-6408535b2492","Type":"ContainerStarted","Data":"49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.652883 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.654245 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ecf7fcae-8493-4333-96c4-d4692a144187","Type":"ContainerStarted","Data":"107d9bd44e85df18c82b4aeccfad805cf3cde4845822859d34619bdc83c08a53"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.655951 4889 generic.go:334] "Generic (PLEG): container finished" podID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerID="2daa0208004ed61abb16fbdbc99bf42d79d5859769f420aea92707a8c6cfa182" exitCode=0 Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.655998 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2mhk" event={"ID":"d69857d8-b0ca-49bd-9d89-3ad02ec7adea","Type":"ContainerDied","Data":"2daa0208004ed61abb16fbdbc99bf42d79d5859769f420aea92707a8c6cfa182"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.658886 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" event={"ID":"a5fafd05-b706-4241-a945-78c9b14aa439","Type":"ContainerStarted","Data":"23bcac4e7ff15b2e594ce80a4f22aac1411e4b1154447275137b17f55a5f8098"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.661133 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xg58q" event={"ID":"fd5deb3d-df4a-48e4-844b-35247485825a","Type":"ContainerStarted","Data":"c21c7f926ac799226c679abd9245c34e020745016ac8880f28de5e36bdb149c6"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.662834 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dlfmr" event={"ID":"723ca26e-f925-47cc-92e3-998ff36f3e92","Type":"ContainerStarted","Data":"bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264"} Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.663008 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dlfmr" Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.702187 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.337462467 podStartE2EDuration="31.70216985s" podCreationTimestamp="2025-11-28 07:06:07 +0000 UTC" firstStartedPulling="2025-11-28 07:06:30.394224417 +0000 UTC m=+1113.364458572" lastFinishedPulling="2025-11-28 07:06:36.7589318 +0000 UTC m=+1119.729165955" observedRunningTime="2025-11-28 07:06:38.70215962 +0000 UTC m=+1121.672393775" watchObservedRunningTime="2025-11-28 07:06:38.70216985 +0000 UTC m=+1121.672404005" Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.726135 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.773735762 podStartE2EDuration="29.726112828s" podCreationTimestamp="2025-11-28 07:06:09 +0000 UTC" firstStartedPulling="2025-11-28 07:06:30.39257358 +0000 UTC m=+1113.362807735" lastFinishedPulling="2025-11-28 07:06:38.344950636 +0000 UTC m=+1121.315184801" observedRunningTime="2025-11-28 07:06:38.716966823 +0000 UTC m=+1121.687200978" watchObservedRunningTime="2025-11-28 07:06:38.726112828 +0000 UTC 
m=+1121.696346983" Nov 28 07:06:38 crc kubenswrapper[4889]: I1128 07:06:38.759930 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dlfmr" podStartSLOduration=16.943858626 podStartE2EDuration="23.759910738s" podCreationTimestamp="2025-11-28 07:06:15 +0000 UTC" firstStartedPulling="2025-11-28 07:06:30.396285534 +0000 UTC m=+1113.366519689" lastFinishedPulling="2025-11-28 07:06:37.212337646 +0000 UTC m=+1120.182571801" observedRunningTime="2025-11-28 07:06:38.755926179 +0000 UTC m=+1121.726160354" watchObservedRunningTime="2025-11-28 07:06:38.759910738 +0000 UTC m=+1121.730144893" Nov 28 07:06:39 crc kubenswrapper[4889]: I1128 07:06:39.344377 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ff0b9c-a6fd-410f-b862-1b373f720e90" path="/var/lib/kubelet/pods/75ff0b9c-a6fd-410f-b862-1b373f720e90/volumes" Nov 28 07:06:39 crc kubenswrapper[4889]: I1128 07:06:39.345200 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0cd5dd3-a98a-433d-bcfa-f7276759f987" path="/var/lib/kubelet/pods/f0cd5dd3-a98a-433d-bcfa-f7276759f987/volumes" Nov 28 07:06:39 crc kubenswrapper[4889]: I1128 07:06:39.672456 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2mhk" event={"ID":"d69857d8-b0ca-49bd-9d89-3ad02ec7adea","Type":"ContainerStarted","Data":"de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb"} Nov 28 07:06:39 crc kubenswrapper[4889]: I1128 07:06:39.672495 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2mhk" event={"ID":"d69857d8-b0ca-49bd-9d89-3ad02ec7adea","Type":"ContainerStarted","Data":"e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f"} Nov 28 07:06:39 crc kubenswrapper[4889]: I1128 07:06:39.694924 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-d2mhk" podStartSLOduration=18.455606383 podStartE2EDuration="24.694905465s" podCreationTimestamp="2025-11-28 07:06:15 +0000 UTC" firstStartedPulling="2025-11-28 07:06:30.808509574 +0000 UTC m=+1113.778743729" lastFinishedPulling="2025-11-28 07:06:37.047808646 +0000 UTC m=+1120.018042811" observedRunningTime="2025-11-28 07:06:39.689142566 +0000 UTC m=+1122.659376721" watchObservedRunningTime="2025-11-28 07:06:39.694905465 +0000 UTC m=+1122.665139610" Nov 28 07:06:40 crc kubenswrapper[4889]: I1128 07:06:40.577945 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:40 crc kubenswrapper[4889]: I1128 07:06:40.578159 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:06:41 crc kubenswrapper[4889]: I1128 07:06:41.687801 4889 generic.go:334] "Generic (PLEG): container finished" podID="a5fafd05-b706-4241-a945-78c9b14aa439" containerID="3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f" exitCode=0 Nov 28 07:06:41 crc kubenswrapper[4889]: I1128 07:06:41.687887 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" event={"ID":"a5fafd05-b706-4241-a945-78c9b14aa439","Type":"ContainerDied","Data":"3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f"} Nov 28 07:06:41 crc kubenswrapper[4889]: I1128 07:06:41.689567 4889 generic.go:334] "Generic (PLEG): container finished" podID="ecf7fcae-8493-4333-96c4-d4692a144187" containerID="107d9bd44e85df18c82b4aeccfad805cf3cde4845822859d34619bdc83c08a53" exitCode=0 
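The pod_startup_latency_tracker entries above all encode one identity: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling), with the pull window measured on the monotonic m= offsets rather than the wall-clock stamps (the kube-state-metrics-0 figures only reconcile exactly that way). A minimal re-derivation sketch follows; Python, stdlib only, with LINE abridging the memcached-0 entry above to its quoted key="value" fields. The parsing is fitted to this log's format, not klog in general.

    import re
    from datetime import datetime

    # Key="value" fields copied verbatim from the memcached-0 entry above.
    LINE = ('pod="openstack/memcached-0" podStartSLOduration=25.337462467 '
            'podStartE2EDuration="31.70216985s" '
            'podCreationTimestamp="2025-11-28 07:06:07 +0000 UTC" '
            'firstStartedPulling="2025-11-28 07:06:30.394224417 +0000 UTC m=+1113.364458572" '
            'lastFinishedPulling="2025-11-28 07:06:36.7589318 +0000 UTC m=+1119.729165955" '
            'watchObservedRunningTime="2025-11-28 07:06:38.70216985 +0000 UTC m=+1121.672404005"')

    fields = dict(re.findall(r'(\w+)="([^"]*)"', LINE))

    def wall(value):
        """'2025-11-28 07:06:07 +0000 UTC ...' -> (datetime, fractional seconds)."""
        date, clock = value.split()[:2]
        whole, _, frac = clock.partition(".")
        return (datetime.strptime(f"{date} {whole}", "%Y-%m-%d %H:%M:%S"),
                float("0." + (frac or "0")))

    def wall_diff(a, b):
        """Wall-clock difference fields[a] - fields[b], in seconds."""
        (da, fa), (db, fb) = wall(fields[a]), wall(fields[b])
        return (da - db).total_seconds() + (fa - fb)

    def mono(key):
        """Monotonic offset: the m=+... suffix, seconds since kubelet start."""
        return float(fields[key].rsplit("m=+", 1)[1])

    e2e = wall_diff("watchObservedRunningTime", "podCreationTimestamp")
    pull = mono("lastFinishedPulling") - mono("firstStartedPulling")
    print(f"E2E {e2e:.9f}s  SLO {e2e - pull:.9f}s")  # 31.702169850s  25.337462467s

The same two subtractions reconcile the kube-state-metrics-0 and ovn-controller-dlfmr entries above as well, which makes this a quick sanity check that a capture was not truncated mid-field.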
Nov 28 07:06:41 crc kubenswrapper[4889]: I1128 07:06:41.689617 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ecf7fcae-8493-4333-96c4-d4692a144187","Type":"ContainerDied","Data":"107d9bd44e85df18c82b4aeccfad805cf3cde4845822859d34619bdc83c08a53"} Nov 28 07:06:41 crc kubenswrapper[4889]: I1128 07:06:41.692212 4889 generic.go:334] "Generic (PLEG): container finished" podID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" containerID="a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507" exitCode=0 Nov 28 07:06:41 crc kubenswrapper[4889]: I1128 07:06:41.692262 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" event={"ID":"cdb0efae-f6ec-4212-b26a-5185a1d09c2f","Type":"ContainerDied","Data":"a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507"} Nov 28 07:06:41 crc kubenswrapper[4889]: I1128 07:06:41.695265 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90d501b3-ad2c-4fb8-814d-411dc2a11f20","Type":"ContainerStarted","Data":"0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f"} Nov 28 07:06:42 crc kubenswrapper[4889]: I1128 07:06:42.704110 4889 generic.go:334] "Generic (PLEG): container finished" podID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" containerID="01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed" exitCode=0 Nov 28 07:06:42 crc kubenswrapper[4889]: I1128 07:06:42.704467 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b4be180d-c2ba-47ad-964d-18e7b1c12b2b","Type":"ContainerDied","Data":"01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed"} Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.715009 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" event={"ID":"cdb0efae-f6ec-4212-b26a-5185a1d09c2f","Type":"ContainerStarted","Data":"3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5"} Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.715353 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.717458 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ecf7fcae-8493-4333-96c4-d4692a144187","Type":"ContainerStarted","Data":"b28087a7afb2a256eeea56a89dcb8579fa36a7333356ade368829f77738b9428"} Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.720185 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c960973-a307-4a8a-9fe6-885450c512e0","Type":"ContainerStarted","Data":"5eb83b765e57ee122fbe625e86ad95bb06d3206e2ca82bde40a2997e84a961fb"} Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.722280 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b4be180d-c2ba-47ad-964d-18e7b1c12b2b","Type":"ContainerStarted","Data":"55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f"} Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.724518 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" event={"ID":"a5fafd05-b706-4241-a945-78c9b14aa439","Type":"ContainerStarted","Data":"0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d"} Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.724647 4889 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.726535 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a92a932b-ef66-408c-883e-99412a94d0da","Type":"ContainerStarted","Data":"e38cd97bce0fc8d698d4e44b7375fde620f8a3ee986dc3c97e437a42647d9d7f"} Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.727935 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xg58q" event={"ID":"fd5deb3d-df4a-48e4-844b-35247485825a","Type":"ContainerStarted","Data":"3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662"} Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.746250 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" podStartSLOduration=4.9847724410000005 podStartE2EDuration="7.746203032s" podCreationTimestamp="2025-11-28 07:06:36 +0000 UTC" firstStartedPulling="2025-11-28 07:06:37.794361304 +0000 UTC m=+1120.764595459" lastFinishedPulling="2025-11-28 07:06:40.555791895 +0000 UTC m=+1123.526026050" observedRunningTime="2025-11-28 07:06:43.739646845 +0000 UTC m=+1126.709881010" watchObservedRunningTime="2025-11-28 07:06:43.746203032 +0000 UTC m=+1126.716437197" Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.759402 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xg58q" podStartSLOduration=3.520958675 podStartE2EDuration="8.759369499s" podCreationTimestamp="2025-11-28 07:06:35 +0000 UTC" firstStartedPulling="2025-11-28 07:06:37.777395713 +0000 UTC m=+1120.747629868" lastFinishedPulling="2025-11-28 07:06:43.015806537 +0000 UTC m=+1125.986040692" observedRunningTime="2025-11-28 07:06:43.754374166 +0000 UTC m=+1126.724608351" watchObservedRunningTime="2025-11-28 07:06:43.759369499 +0000 UTC m=+1126.729603694" Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.783587 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" podStartSLOduration=6.397872922 podStartE2EDuration="8.783568463s" podCreationTimestamp="2025-11-28 07:06:35 +0000 UTC" firstStartedPulling="2025-11-28 07:06:38.318343128 +0000 UTC m=+1121.288577283" lastFinishedPulling="2025-11-28 07:06:40.704038669 +0000 UTC m=+1123.674272824" observedRunningTime="2025-11-28 07:06:43.77723083 +0000 UTC m=+1126.747464995" watchObservedRunningTime="2025-11-28 07:06:43.783568463 +0000 UTC m=+1126.753802618" Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.809959 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=33.002391614 podStartE2EDuration="39.809937916s" podCreationTimestamp="2025-11-28 07:06:04 +0000 UTC" firstStartedPulling="2025-11-28 07:06:30.40458395 +0000 UTC m=+1113.374818105" lastFinishedPulling="2025-11-28 07:06:37.212130252 +0000 UTC m=+1120.182364407" observedRunningTime="2025-11-28 07:06:43.80480384 +0000 UTC m=+1126.775038035" watchObservedRunningTime="2025-11-28 07:06:43.809937916 +0000 UTC m=+1126.780172091" Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.835339 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.238685039 podStartE2EDuration="31.835290356s" podCreationTimestamp="2025-11-28 07:06:12 +0000 UTC" firstStartedPulling="2025-11-28 07:06:30.462763499 
+0000 UTC m=+1113.432997654" lastFinishedPulling="2025-11-28 07:06:43.059368816 +0000 UTC m=+1126.029602971" observedRunningTime="2025-11-28 07:06:43.827865009 +0000 UTC m=+1126.798099194" watchObservedRunningTime="2025-11-28 07:06:43.835290356 +0000 UTC m=+1126.805524541" Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.860159 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.535485404 podStartE2EDuration="28.860132835s" podCreationTimestamp="2025-11-28 07:06:15 +0000 UTC" firstStartedPulling="2025-11-28 07:06:30.799995263 +0000 UTC m=+1113.770229418" lastFinishedPulling="2025-11-28 07:06:43.124642704 +0000 UTC m=+1126.094876849" observedRunningTime="2025-11-28 07:06:43.849663269 +0000 UTC m=+1126.819897454" watchObservedRunningTime="2025-11-28 07:06:43.860132835 +0000 UTC m=+1126.830367000" Nov 28 07:06:43 crc kubenswrapper[4889]: I1128 07:06:43.868967 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=31.159534279 podStartE2EDuration="37.868947623s" podCreationTimestamp="2025-11-28 07:06:06 +0000 UTC" firstStartedPulling="2025-11-28 07:06:30.049503076 +0000 UTC m=+1113.019737231" lastFinishedPulling="2025-11-28 07:06:36.75891642 +0000 UTC m=+1119.729150575" observedRunningTime="2025-11-28 07:06:43.867202774 +0000 UTC m=+1126.837436929" watchObservedRunningTime="2025-11-28 07:06:43.868947623 +0000 UTC m=+1126.839181778" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.238779 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.239166 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.487577 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.487815 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.522670 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.595060 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.630841 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.751623 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.787504 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 07:06:46 crc kubenswrapper[4889]: I1128 07:06:46.791975 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.162258 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.165062 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.190264 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.190296 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tgqd8" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.190581 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.191862 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.202408 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.280817 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.280877 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.280910 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-scripts\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.280933 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-config\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.280982 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.281005 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.281039 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2826t\" (UniqueName: \"kubernetes.io/projected/972b231d-adb2-4355-ae5b-57fc0cc642f4-kube-api-access-2826t\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: 
I1128 07:06:47.382884 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2826t\" (UniqueName: \"kubernetes.io/projected/972b231d-adb2-4355-ae5b-57fc0cc642f4-kube-api-access-2826t\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.383034 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.383072 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.383110 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-scripts\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.383149 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-config\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.383213 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.383250 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.384295 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-scripts\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.384914 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.385488 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-config\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.390004 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.391781 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.392328 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.411948 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2826t\" (UniqueName: \"kubernetes.io/projected/972b231d-adb2-4355-ae5b-57fc0cc642f4-kube-api-access-2826t\") pod \"ovn-northd-0\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.511688 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.875908 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.876216 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:47 crc kubenswrapper[4889]: I1128 07:06:47.951977 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 28 07:06:48 crc kubenswrapper[4889]: I1128 07:06:48.042003 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:06:48 crc kubenswrapper[4889]: W1128 07:06:48.043688 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod972b231d_adb2_4355_ae5b_57fc0cc642f4.slice/crio-b0e4d685247e21423d7f0f05034aa7485b9a0d9a040e053038a63b1640c19c1c WatchSource:0}: Error finding container b0e4d685247e21423d7f0f05034aa7485b9a0d9a040e053038a63b1640c19c1c: Status 404 returned error can't find the container with id b0e4d685247e21423d7f0f05034aa7485b9a0d9a040e053038a63b1640c19c1c Nov 28 07:06:48 crc kubenswrapper[4889]: I1128 07:06:48.480853 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 28 07:06:48 crc kubenswrapper[4889]: I1128 07:06:48.591098 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 28 07:06:48 crc kubenswrapper[4889]: I1128 07:06:48.773666 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"972b231d-adb2-4355-ae5b-57fc0cc642f4","Type":"ContainerStarted","Data":"b0e4d685247e21423d7f0f05034aa7485b9a0d9a040e053038a63b1640c19c1c"} Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.729495 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b7ccdcb4f-jzrl6"] 
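Between the SyncLoop ADD for openstack/ovn-northd-0 at 07:06:47.162258 and its first ContainerStarted event at 07:06:48.773666, each of the pod's seven volumes passes through the same three reconciler phases in order: operationExecutor.VerifyControllerAttachedVolume started, operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A rough per-volume timeline extractor for a capture like this one is sketched below; Python, stdlib only. The regex is fitted to the escaped-quote klog text above, deliberately skips the MountDevice and unmount/teardown messages, and the kubelet.journal filename is a placeholder, so treat it as a sketch rather than a general klog parser.

    import re
    from collections import defaultdict

    # The three mount-side phases exactly as they appear in this journal, e.g.
    # "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: ...)".
    PHASE = re.compile(
        r'"(?:operationExecutor\.)?'
        r'(VerifyControllerAttachedVolume|MountVolume(?:\.SetUp)?)'
        r' (?:started|succeeded) for volume \\?"([^"\\]+)\\?"'
    )

    def volume_timelines(lines):
        """Map each volume name to the reconciler phases in logged order."""
        seen = defaultdict(list)
        for line in lines:
            for phase, volume in PHASE.findall(line):
                seen[volume].append(phase)
        return seen

    with open("kubelet.journal") as f:  # placeholder path for a capture of this log
        timelines = volume_timelines(f)
    print(timelines.get("kube-api-access-2826t"))
    # -> ['VerifyControllerAttachedVolume', 'MountVolume', 'MountVolume.SetUp']

The example key is ovn-northd-0's projected token volume because it is unique in this capture; generic names like config, scripts, and dns-svc recur across pods here, so anything beyond a sketch would also key on the pod="..." field.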
Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.730688 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" podUID="a5fafd05-b706-4241-a945-78c9b14aa439" containerName="dnsmasq-dns" containerID="cri-o://0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d" gracePeriod=10 Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.734862 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.748915 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.764052 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f6d79597f-rtnp6"] Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.765540 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.821838 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f6d79597f-rtnp6"] Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.829219 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"972b231d-adb2-4355-ae5b-57fc0cc642f4","Type":"ContainerStarted","Data":"501a4b31916c81c75b98f9162dc9d571bda2ac1eeda86e0c705c757893b500ab"} Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.829276 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"972b231d-adb2-4355-ae5b-57fc0cc642f4","Type":"ContainerStarted","Data":"a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833"} Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.829916 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.833186 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-dns-svc\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.833490 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-config\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.833771 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.833895 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdrp\" (UniqueName: \"kubernetes.io/projected/d9beecee-59b4-475a-bcb2-9c37360314f8-kube-api-access-2rdrp\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " 
pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.834031 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.868120 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.8173645440000001 podStartE2EDuration="2.868099364s" podCreationTimestamp="2025-11-28 07:06:47 +0000 UTC" firstStartedPulling="2025-11-28 07:06:48.04701846 +0000 UTC m=+1131.017252615" lastFinishedPulling="2025-11-28 07:06:49.09775328 +0000 UTC m=+1132.067987435" observedRunningTime="2025-11-28 07:06:49.856642066 +0000 UTC m=+1132.826876241" watchObservedRunningTime="2025-11-28 07:06:49.868099364 +0000 UTC m=+1132.838333519" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.936324 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.936371 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdrp\" (UniqueName: \"kubernetes.io/projected/d9beecee-59b4-475a-bcb2-9c37360314f8-kube-api-access-2rdrp\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.936392 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.936447 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-dns-svc\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.936520 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-config\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.938291 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.938432 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.939288 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-config\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.939540 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-dns-svc\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:49 crc kubenswrapper[4889]: I1128 07:06:49.957913 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdrp\" (UniqueName: \"kubernetes.io/projected/d9beecee-59b4-475a-bcb2-9c37360314f8-kube-api-access-2rdrp\") pod \"dnsmasq-dns-5f6d79597f-rtnp6\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.109424 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.124131 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.223379 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.240358 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.343329 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-dns-svc\") pod \"a5fafd05-b706-4241-a945-78c9b14aa439\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.343653 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqx8f\" (UniqueName: \"kubernetes.io/projected/a5fafd05-b706-4241-a945-78c9b14aa439-kube-api-access-rqx8f\") pod \"a5fafd05-b706-4241-a945-78c9b14aa439\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.343783 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-ovsdbserver-sb\") pod \"a5fafd05-b706-4241-a945-78c9b14aa439\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.343839 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-config\") pod \"a5fafd05-b706-4241-a945-78c9b14aa439\" (UID: \"a5fafd05-b706-4241-a945-78c9b14aa439\") " Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.394285 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fafd05-b706-4241-a945-78c9b14aa439-kube-api-access-rqx8f" (OuterVolumeSpecName: "kube-api-access-rqx8f") pod "a5fafd05-b706-4241-a945-78c9b14aa439" (UID: "a5fafd05-b706-4241-a945-78c9b14aa439"). InnerVolumeSpecName "kube-api-access-rqx8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.424481 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5fafd05-b706-4241-a945-78c9b14aa439" (UID: "a5fafd05-b706-4241-a945-78c9b14aa439"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.426728 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a5fafd05-b706-4241-a945-78c9b14aa439" (UID: "a5fafd05-b706-4241-a945-78c9b14aa439"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.427634 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-config" (OuterVolumeSpecName: "config") pod "a5fafd05-b706-4241-a945-78c9b14aa439" (UID: "a5fafd05-b706-4241-a945-78c9b14aa439"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.445903 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.445937 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.445950 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqx8f\" (UniqueName: \"kubernetes.io/projected/a5fafd05-b706-4241-a945-78c9b14aa439-kube-api-access-rqx8f\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.445962 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fafd05-b706-4241-a945-78c9b14aa439-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.838472 4889 generic.go:334] "Generic (PLEG): container finished" podID="a5fafd05-b706-4241-a945-78c9b14aa439" containerID="0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d" exitCode=0 Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.838553 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.838549 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" event={"ID":"a5fafd05-b706-4241-a945-78c9b14aa439","Type":"ContainerDied","Data":"0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d"} Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.838619 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7ccdcb4f-jzrl6" event={"ID":"a5fafd05-b706-4241-a945-78c9b14aa439","Type":"ContainerDied","Data":"23bcac4e7ff15b2e594ce80a4f22aac1411e4b1154447275137b17f55a5f8098"} Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.838644 4889 scope.go:117] "RemoveContainer" containerID="0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.838677 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f6d79597f-rtnp6"] Nov 28 07:06:50 crc kubenswrapper[4889]: W1128 07:06:50.844015 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9beecee_59b4_475a_bcb2_9c37360314f8.slice/crio-15cb8b995af2d305f9e721544f934c3354bb20d6c7f280cb8878fc5620802a83 WatchSource:0}: Error finding container 15cb8b995af2d305f9e721544f934c3354bb20d6c7f280cb8878fc5620802a83: Status 404 returned error can't find the container with id 15cb8b995af2d305f9e721544f934c3354bb20d6c7f280cb8878fc5620802a83 Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.868730 4889 scope.go:117] "RemoveContainer" containerID="3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.872217 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b7ccdcb4f-jzrl6"] Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.880926 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-b7ccdcb4f-jzrl6"] Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.885305 4889 scope.go:117] "RemoveContainer" containerID="0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d" Nov 28 07:06:50 crc kubenswrapper[4889]: E1128 07:06:50.885940 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d\": container with ID starting with 0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d not found: ID does not exist" containerID="0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.885977 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d"} err="failed to get container status \"0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d\": rpc error: code = NotFound desc = could not find container \"0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d\": container with ID starting with 0fc03f3800094fff6cbb1d627267175d381c8546f28a56fbde91a525771bb77d not found: ID does not exist" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.886008 4889 scope.go:117] "RemoveContainer" containerID="3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f" Nov 28 07:06:50 crc kubenswrapper[4889]: E1128 07:06:50.886432 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f\": container with ID starting with 3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f not found: ID does not exist" containerID="3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.886453 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f"} err="failed to get container status \"3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f\": rpc error: code = NotFound desc = could not find container \"3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f\": container with ID starting with 3d86baf8e4b90ced7b02a76e693b5b50e7b6c5126e2be03d6fc3d90fc340416f not found: ID does not exist" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.944398 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:06:50 crc kubenswrapper[4889]: E1128 07:06:50.944699 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fafd05-b706-4241-a945-78c9b14aa439" containerName="init" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.944726 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fafd05-b706-4241-a945-78c9b14aa439" containerName="init" Nov 28 07:06:50 crc kubenswrapper[4889]: E1128 07:06:50.944749 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fafd05-b706-4241-a945-78c9b14aa439" containerName="dnsmasq-dns" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.944755 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fafd05-b706-4241-a945-78c9b14aa439" containerName="dnsmasq-dns" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.944927 4889 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a5fafd05-b706-4241-a945-78c9b14aa439" containerName="dnsmasq-dns" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.955456 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.955793 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.958539 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.958766 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.959321 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 28 07:06:50 crc kubenswrapper[4889]: I1128 07:06:50.959489 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4lrjg" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.061242 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.061304 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.061334 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-cache\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.061633 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-lock\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.061739 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qvsr\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-kube-api-access-4qvsr\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.163333 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: E1128 07:06:51.163525 4889 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:06:51 crc kubenswrapper[4889]: E1128 07:06:51.163552 4889 projected.go:194] Error preparing data 
for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.163696 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: E1128 07:06:51.163772 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift podName:637e0576-2707-4c19-82d5-837d5e39578a nodeName:}" failed. No retries permitted until 2025-11-28 07:06:51.663732801 +0000 UTC m=+1134.633966946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift") pod "swift-storage-0" (UID: "637e0576-2707-4c19-82d5-837d5e39578a") : configmap "swift-ring-files" not found Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.163945 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-cache\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.164079 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.164192 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-lock\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.164296 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qvsr\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-kube-api-access-4qvsr\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.164846 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-lock\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.164973 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-cache\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.182536 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qvsr\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-kube-api-access-4qvsr\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " 
pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.186095 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.344037 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fafd05-b706-4241-a945-78c9b14aa439" path="/var/lib/kubelet/pods/a5fafd05-b706-4241-a945-78c9b14aa439/volumes" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.474586 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-j6wjv"] Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.476037 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.478089 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.478116 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.479555 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.502827 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j6wjv"] Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.565873 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.571869 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-combined-ca-bundle\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.571920 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-swiftconf\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.571957 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8zj\" (UniqueName: \"kubernetes.io/projected/5c74af7d-0271-4b1d-8c93-88d33ca6329c-kube-api-access-5k8zj\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.572113 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-scripts\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.572202 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c74af7d-0271-4b1d-8c93-88d33ca6329c-etc-swift\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.572234 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-dispersionconf\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.572267 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-ring-data-devices\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.674650 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-scripts\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.674747 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c74af7d-0271-4b1d-8c93-88d33ca6329c-etc-swift\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.674767 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-dispersionconf\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.674788 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-ring-data-devices\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.674816 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.674999 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-combined-ca-bundle\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.675040 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-swiftconf\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.675082 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k8zj\" (UniqueName: \"kubernetes.io/projected/5c74af7d-0271-4b1d-8c93-88d33ca6329c-kube-api-access-5k8zj\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.675152 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c74af7d-0271-4b1d-8c93-88d33ca6329c-etc-swift\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: E1128 07:06:51.675405 4889 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:06:51 crc kubenswrapper[4889]: E1128 07:06:51.675426 4889 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:06:51 crc kubenswrapper[4889]: E1128 07:06:51.675468 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift podName:637e0576-2707-4c19-82d5-837d5e39578a nodeName:}" failed. No retries permitted until 2025-11-28 07:06:52.675453007 +0000 UTC m=+1135.645687152 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift") pod "swift-storage-0" (UID: "637e0576-2707-4c19-82d5-837d5e39578a") : configmap "swift-ring-files" not found Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.676210 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-ring-data-devices\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.676286 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-scripts\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.678938 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-combined-ca-bundle\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.679390 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-swiftconf\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.682861 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-dispersionconf\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv"
Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.695966 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k8zj\" (UniqueName: \"kubernetes.io/projected/5c74af7d-0271-4b1d-8c93-88d33ca6329c-kube-api-access-5k8zj\") pod \"swift-ring-rebalance-j6wjv\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " pod="openstack/swift-ring-rebalance-j6wjv"
Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.809598 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j6wjv"
Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.863999 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" event={"ID":"d9beecee-59b4-475a-bcb2-9c37360314f8","Type":"ContainerDied","Data":"12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9"}
Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.863959 4889 generic.go:334] "Generic (PLEG): container finished" podID="d9beecee-59b4-475a-bcb2-9c37360314f8" containerID="12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9" exitCode=0
Nov 28 07:06:51 crc kubenswrapper[4889]: I1128 07:06:51.865014 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" event={"ID":"d9beecee-59b4-475a-bcb2-9c37360314f8","Type":"ContainerStarted","Data":"15cb8b995af2d305f9e721544f934c3354bb20d6c7f280cb8878fc5620802a83"}
Nov 28 07:06:52 crc kubenswrapper[4889]: I1128 07:06:52.297203 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j6wjv"]
Nov 28 07:06:52 crc kubenswrapper[4889]: W1128 07:06:52.303092 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c74af7d_0271_4b1d_8c93_88d33ca6329c.slice/crio-1a3f2ee5d05d9f6a67d8fff412d36f4b7e992e205eb6552b8a5338e56b48415a WatchSource:0}: Error finding container 1a3f2ee5d05d9f6a67d8fff412d36f4b7e992e205eb6552b8a5338e56b48415a: Status 404 returned error can't find the container with id 1a3f2ee5d05d9f6a67d8fff412d36f4b7e992e205eb6552b8a5338e56b48415a
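The manager.go:1169 warning above is cAdvisor racing container creation: the pod's cgroup slice shows up in its watch before CRI-O has registered the new container, so the lookup returns 404 and the event is dropped; the same container ID appears in a ContainerStarted event moments later, so the warning is transient. The slice path also shows how the systemd cgroup driver encodes the pod UID, with dashes rewritten to underscores because dashes delimit slice hierarchy in systemd unit names. A small illustrative helper (hypothetical, not kubelet code) that recovers the pod UID and container ID from such a path:

import re

# Cgroup path copied from the warning above.
path = ("/kubepods.slice/kubepods-besteffort.slice/"
        "kubepods-besteffort-pod5c74af7d_0271_4b1d_8c93_88d33ca6329c.slice/"
        "crio-1a3f2ee5d05d9f6a67d8fff412d36f4b7e992e205eb6552b8a5338e56b48415a")

m = re.search(r"pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)", path)
pod_uid = m.group(1).replace("_", "-")  # undo the systemd-safe escaping
container_id = m.group(2)
print(pod_uid)       # 5c74af7d-0271-4b1d-8c93-88d33ca6329c (swift-ring-rebalance-j6wjv)
print(container_id)  # the crio-... ID cAdvisor could not yet resolve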
Nov 28 07:06:52 crc kubenswrapper[4889]: I1128 07:06:52.698552 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0"
Nov 28 07:06:52 crc kubenswrapper[4889]: E1128 07:06:52.698751 4889 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 07:06:52 crc kubenswrapper[4889]: E1128 07:06:52.698780 4889 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 07:06:52 crc kubenswrapper[4889]: E1128 07:06:52.698846 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift podName:637e0576-2707-4c19-82d5-837d5e39578a nodeName:}" failed. No retries permitted until 2025-11-28 07:06:54.698825911 +0000 UTC m=+1137.669060066 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift") pod "swift-storage-0" (UID: "637e0576-2707-4c19-82d5-837d5e39578a") : configmap "swift-ring-files" not found
Nov 28 07:06:52 crc kubenswrapper[4889]: I1128 07:06:52.873373 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" event={"ID":"d9beecee-59b4-475a-bcb2-9c37360314f8","Type":"ContainerStarted","Data":"ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a"}
Nov 28 07:06:52 crc kubenswrapper[4889]: I1128 07:06:52.873529 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6"
Nov 28 07:06:52 crc kubenswrapper[4889]: I1128 07:06:52.874537 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6wjv" event={"ID":"5c74af7d-0271-4b1d-8c93-88d33ca6329c","Type":"ContainerStarted","Data":"1a3f2ee5d05d9f6a67d8fff412d36f4b7e992e205eb6552b8a5338e56b48415a"}
Nov 28 07:06:52 crc kubenswrapper[4889]: I1128 07:06:52.895367 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" podStartSLOduration=3.89534378 podStartE2EDuration="3.89534378s" podCreationTimestamp="2025-11-28 07:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:06:52.889409137 +0000 UTC m=+1135.859643302" watchObservedRunningTime="2025-11-28 07:06:52.89534378 +0000 UTC m=+1135.865577935"
Nov 28 07:06:54 crc kubenswrapper[4889]: I1128 07:06:54.731621 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0"
Nov 28 07:06:54 crc kubenswrapper[4889]: E1128 07:06:54.732008 4889 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 07:06:54 crc kubenswrapper[4889]: E1128 07:06:54.732059 4889 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 07:06:54 crc kubenswrapper[4889]: E1128 07:06:54.732250 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift podName:637e0576-2707-4c19-82d5-837d5e39578a nodeName:}" failed. No retries permitted until 2025-11-28 07:06:58.732220099 +0000 UTC m=+1141.702454274 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift") pod "swift-storage-0" (UID: "637e0576-2707-4c19-82d5-837d5e39578a") : configmap "swift-ring-files" not found
Nov 28 07:06:56 crc kubenswrapper[4889]: I1128 07:06:56.919152 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6wjv" event={"ID":"5c74af7d-0271-4b1d-8c93-88d33ca6329c","Type":"ContainerStarted","Data":"63d8e75a181bc24f5d07e30475fe1dd420fb0a22f5ad9e8334587097adfb8675"}
Nov 28 07:06:56 crc kubenswrapper[4889]: I1128 07:06:56.938227 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-j6wjv" podStartSLOduration=2.004559126 podStartE2EDuration="5.938213047s" podCreationTimestamp="2025-11-28 07:06:51 +0000 UTC" firstStartedPulling="2025-11-28 07:06:52.305487535 +0000 UTC m=+1135.275721690" lastFinishedPulling="2025-11-28 07:06:56.239141456 +0000 UTC m=+1139.209375611" observedRunningTime="2025-11-28 07:06:56.933570033 +0000 UTC m=+1139.903804198" watchObservedRunningTime="2025-11-28 07:06:56.938213047 +0000 UTC m=+1139.908447202"
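The startup-latency entry above reports two durations for swift-ring-rebalance-j6wjv: podStartE2EDuration (5.938s, podCreationTimestamp to watchObservedRunningTime) and podStartSLOduration (2.005s), which subtracts the image-pull window bounded by firstStartedPulling and lastFinishedPulling. The arithmetic checks out from the timestamps in the entry itself (a throwaway verification; the tracker's real accounting lives in the kubelet's pod_startup_latency_tracker.go):

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"  # log timestamps truncated to microseconds here
created    = datetime.strptime("2025-11-28 07:06:51.000000", fmt)  # podCreationTimestamp
pull_start = datetime.strptime("2025-11-28 07:06:52.305487", fmt)  # firstStartedPulling
pull_end   = datetime.strptime("2025-11-28 07:06:56.239141", fmt)  # lastFinishedPulling
running    = datetime.strptime("2025-11-28 07:06:56.938213", fmt)  # watchObservedRunningTime

e2e  = (running - created).total_seconds()
pull = (pull_end - pull_start).total_seconds()
print(round(e2e, 6), round(pull, 6), round(e2e - pull, 6))
# 5.938213 3.933654 2.004559 -- the last value matches podStartSLOduration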
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.449653 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-df7b-account-create-update-8ltbq"]
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.451698 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-df7b-account-create-update-8ltbq"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.454492 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.457554 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-df7b-account-create-update-8ltbq"]
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.504199 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5fjcb"]
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.505207 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5fjcb"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.511800 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5fjcb"]
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.582238 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctqk\" (UniqueName: \"kubernetes.io/projected/0c0713e6-6a1f-45ee-9929-4ab652d46e06-kube-api-access-vctqk\") pod \"keystone-db-create-5fjcb\" (UID: \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\") " pod="openstack/keystone-db-create-5fjcb"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.582392 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ld2b\" (UniqueName: \"kubernetes.io/projected/b5e1fb75-c9d4-40d4-97fa-41162ea57360-kube-api-access-2ld2b\") pod \"keystone-df7b-account-create-update-8ltbq\" (UID: \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\") " pod="openstack/keystone-df7b-account-create-update-8ltbq"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.582457 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1fb75-c9d4-40d4-97fa-41162ea57360-operator-scripts\") pod \"keystone-df7b-account-create-update-8ltbq\" (UID: \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\") " pod="openstack/keystone-df7b-account-create-update-8ltbq"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.582511 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0713e6-6a1f-45ee-9929-4ab652d46e06-operator-scripts\") pod \"keystone-db-create-5fjcb\" (UID: \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\") " pod="openstack/keystone-db-create-5fjcb"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.684386 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ld2b\" (UniqueName: \"kubernetes.io/projected/b5e1fb75-c9d4-40d4-97fa-41162ea57360-kube-api-access-2ld2b\") pod \"keystone-df7b-account-create-update-8ltbq\" (UID: \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\") " pod="openstack/keystone-df7b-account-create-update-8ltbq"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.684479 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1fb75-c9d4-40d4-97fa-41162ea57360-operator-scripts\") pod \"keystone-df7b-account-create-update-8ltbq\" (UID: \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\") " pod="openstack/keystone-df7b-account-create-update-8ltbq"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.684546 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0713e6-6a1f-45ee-9929-4ab652d46e06-operator-scripts\") pod \"keystone-db-create-5fjcb\" (UID: \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\") " pod="openstack/keystone-db-create-5fjcb"
Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.685579 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vctqk\" (UniqueName: \"kubernetes.io/projected/0c0713e6-6a1f-45ee-9929-4ab652d46e06-kube-api-access-vctqk\") pod \"keystone-db-create-5fjcb\" (UID: \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\") "
pod="openstack/keystone-db-create-5fjcb" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.685478 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0713e6-6a1f-45ee-9929-4ab652d46e06-operator-scripts\") pod \"keystone-db-create-5fjcb\" (UID: \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\") " pod="openstack/keystone-db-create-5fjcb" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.685343 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1fb75-c9d4-40d4-97fa-41162ea57360-operator-scripts\") pod \"keystone-df7b-account-create-update-8ltbq\" (UID: \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\") " pod="openstack/keystone-df7b-account-create-update-8ltbq" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.710196 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ld2b\" (UniqueName: \"kubernetes.io/projected/b5e1fb75-c9d4-40d4-97fa-41162ea57360-kube-api-access-2ld2b\") pod \"keystone-df7b-account-create-update-8ltbq\" (UID: \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\") " pod="openstack/keystone-df7b-account-create-update-8ltbq" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.710422 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vctqk\" (UniqueName: \"kubernetes.io/projected/0c0713e6-6a1f-45ee-9929-4ab652d46e06-kube-api-access-vctqk\") pod \"keystone-db-create-5fjcb\" (UID: \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\") " pod="openstack/keystone-db-create-5fjcb" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.712670 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nstbj"] Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.714083 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nstbj" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.725744 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nstbj"] Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.780906 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-df7b-account-create-update-8ltbq" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.786635 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6v5p\" (UniqueName: \"kubernetes.io/projected/1382a8ac-8448-45ed-8bd1-74426b1aa746-kube-api-access-x6v5p\") pod \"placement-db-create-nstbj\" (UID: \"1382a8ac-8448-45ed-8bd1-74426b1aa746\") " pod="openstack/placement-db-create-nstbj" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.786806 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1382a8ac-8448-45ed-8bd1-74426b1aa746-operator-scripts\") pod \"placement-db-create-nstbj\" (UID: \"1382a8ac-8448-45ed-8bd1-74426b1aa746\") " pod="openstack/placement-db-create-nstbj" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.828175 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5fjcb" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.875389 4889 scope.go:117] "RemoveContainer" containerID="d5b766e85d69f9973da3af666702b17a17a302e70a61adab37811dd79ec37df4" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.875559 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-75e4-account-create-update-bkcxt"] Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.877287 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.880546 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.892244 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6v5p\" (UniqueName: \"kubernetes.io/projected/1382a8ac-8448-45ed-8bd1-74426b1aa746-kube-api-access-x6v5p\") pod \"placement-db-create-nstbj\" (UID: \"1382a8ac-8448-45ed-8bd1-74426b1aa746\") " pod="openstack/placement-db-create-nstbj" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.892419 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1382a8ac-8448-45ed-8bd1-74426b1aa746-operator-scripts\") pod \"placement-db-create-nstbj\" (UID: \"1382a8ac-8448-45ed-8bd1-74426b1aa746\") " pod="openstack/placement-db-create-nstbj" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.893779 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1382a8ac-8448-45ed-8bd1-74426b1aa746-operator-scripts\") pod \"placement-db-create-nstbj\" (UID: \"1382a8ac-8448-45ed-8bd1-74426b1aa746\") " pod="openstack/placement-db-create-nstbj" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.895625 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75e4-account-create-update-bkcxt"] Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.916431 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6v5p\" (UniqueName: \"kubernetes.io/projected/1382a8ac-8448-45ed-8bd1-74426b1aa746-kube-api-access-x6v5p\") pod \"placement-db-create-nstbj\" (UID: \"1382a8ac-8448-45ed-8bd1-74426b1aa746\") " pod="openstack/placement-db-create-nstbj" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.982959 4889 scope.go:117] "RemoveContainer" containerID="b01ed69ebad4efeb4cc6345d1f5768a80aa5db2c8f3b4cec229c2f57bfb179a0" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.993663 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ff3fdda-c81d-4b71-967a-d482454d5e3e-operator-scripts\") pod \"placement-75e4-account-create-update-bkcxt\" (UID: \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\") " pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:06:57 crc kubenswrapper[4889]: I1128 07:06:57.994490 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csnsg\" (UniqueName: \"kubernetes.io/projected/3ff3fdda-c81d-4b71-967a-d482454d5e3e-kube-api-access-csnsg\") pod \"placement-75e4-account-create-update-bkcxt\" (UID: \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\") " 
pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.014507 4889 scope.go:117] "RemoveContainer" containerID="40520db339807d902dc7a03b679d43b6136b6bbd07b05aa92f8953549fc8f2b0" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.073317 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7grgp"] Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.074898 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7grgp" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.086985 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nstbj" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.090067 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7grgp"] Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.095896 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ff3fdda-c81d-4b71-967a-d482454d5e3e-operator-scripts\") pod \"placement-75e4-account-create-update-bkcxt\" (UID: \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\") " pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.096032 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csnsg\" (UniqueName: \"kubernetes.io/projected/3ff3fdda-c81d-4b71-967a-d482454d5e3e-kube-api-access-csnsg\") pod \"placement-75e4-account-create-update-bkcxt\" (UID: \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\") " pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.096740 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ff3fdda-c81d-4b71-967a-d482454d5e3e-operator-scripts\") pod \"placement-75e4-account-create-update-bkcxt\" (UID: \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\") " pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.113399 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csnsg\" (UniqueName: \"kubernetes.io/projected/3ff3fdda-c81d-4b71-967a-d482454d5e3e-kube-api-access-csnsg\") pod \"placement-75e4-account-create-update-bkcxt\" (UID: \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\") " pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.185531 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f2a1-account-create-update-lhfcx"] Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.187006 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.189393 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.196505 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f2a1-account-create-update-lhfcx"] Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.197180 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2nm9\" (UniqueName: \"kubernetes.io/projected/53b94446-0a24-4eaa-ab88-62168ad8c7b7-kube-api-access-x2nm9\") pod \"glance-db-create-7grgp\" (UID: \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\") " pod="openstack/glance-db-create-7grgp" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.197227 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b94446-0a24-4eaa-ab88-62168ad8c7b7-operator-scripts\") pod \"glance-db-create-7grgp\" (UID: \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\") " pod="openstack/glance-db-create-7grgp" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.229868 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.280722 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-df7b-account-create-update-8ltbq"] Nov 28 07:06:58 crc kubenswrapper[4889]: W1128 07:06:58.292023 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5e1fb75_c9d4_40d4_97fa_41162ea57360.slice/crio-7949847cf011339042752b019b37409406fda238eb534a1e87744bf80ec9cb07 WatchSource:0}: Error finding container 7949847cf011339042752b019b37409406fda238eb534a1e87744bf80ec9cb07: Status 404 returned error can't find the container with id 7949847cf011339042752b019b37409406fda238eb534a1e87744bf80ec9cb07 Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.300200 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2vm\" (UniqueName: \"kubernetes.io/projected/66cdccd7-317a-47fb-a7e1-06ac1924af9c-kube-api-access-xx2vm\") pod \"glance-f2a1-account-create-update-lhfcx\" (UID: \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\") " pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.300325 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2nm9\" (UniqueName: \"kubernetes.io/projected/53b94446-0a24-4eaa-ab88-62168ad8c7b7-kube-api-access-x2nm9\") pod \"glance-db-create-7grgp\" (UID: \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\") " pod="openstack/glance-db-create-7grgp" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.300351 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b94446-0a24-4eaa-ab88-62168ad8c7b7-operator-scripts\") pod \"glance-db-create-7grgp\" (UID: \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\") " pod="openstack/glance-db-create-7grgp" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.300446 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/66cdccd7-317a-47fb-a7e1-06ac1924af9c-operator-scripts\") pod \"glance-f2a1-account-create-update-lhfcx\" (UID: \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\") " pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.301165 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b94446-0a24-4eaa-ab88-62168ad8c7b7-operator-scripts\") pod \"glance-db-create-7grgp\" (UID: \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\") " pod="openstack/glance-db-create-7grgp" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.320136 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2nm9\" (UniqueName: \"kubernetes.io/projected/53b94446-0a24-4eaa-ab88-62168ad8c7b7-kube-api-access-x2nm9\") pod \"glance-db-create-7grgp\" (UID: \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\") " pod="openstack/glance-db-create-7grgp" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.375745 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5fjcb"] Nov 28 07:06:58 crc kubenswrapper[4889]: W1128 07:06:58.395864 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c0713e6_6a1f_45ee_9929_4ab652d46e06.slice/crio-b1599fd05e798a4a9f0937168eab32ba357d369b4d073b3be73713a7a25d497f WatchSource:0}: Error finding container b1599fd05e798a4a9f0937168eab32ba357d369b4d073b3be73713a7a25d497f: Status 404 returned error can't find the container with id b1599fd05e798a4a9f0937168eab32ba357d369b4d073b3be73713a7a25d497f Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.396044 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7grgp" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.402301 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx2vm\" (UniqueName: \"kubernetes.io/projected/66cdccd7-317a-47fb-a7e1-06ac1924af9c-kube-api-access-xx2vm\") pod \"glance-f2a1-account-create-update-lhfcx\" (UID: \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\") " pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.402465 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cdccd7-317a-47fb-a7e1-06ac1924af9c-operator-scripts\") pod \"glance-f2a1-account-create-update-lhfcx\" (UID: \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\") " pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.404217 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cdccd7-317a-47fb-a7e1-06ac1924af9c-operator-scripts\") pod \"glance-f2a1-account-create-update-lhfcx\" (UID: \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\") " pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.423485 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx2vm\" (UniqueName: \"kubernetes.io/projected/66cdccd7-317a-47fb-a7e1-06ac1924af9c-kube-api-access-xx2vm\") pod \"glance-f2a1-account-create-update-lhfcx\" (UID: \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\") " pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.504176 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.562634 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nstbj"] Nov 28 07:06:58 crc kubenswrapper[4889]: W1128 07:06:58.572955 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1382a8ac_8448_45ed_8bd1_74426b1aa746.slice/crio-1d690cc2e31b53314d9dfd2b955b5f0c79cd79661745eebe1152cd1382317a99 WatchSource:0}: Error finding container 1d690cc2e31b53314d9dfd2b955b5f0c79cd79661745eebe1152cd1382317a99: Status 404 returned error can't find the container with id 1d690cc2e31b53314d9dfd2b955b5f0c79cd79661745eebe1152cd1382317a99 Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.689165 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75e4-account-create-update-bkcxt"] Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.810736 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:06:58 crc kubenswrapper[4889]: E1128 07:06:58.810993 4889 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:06:58 crc kubenswrapper[4889]: E1128 07:06:58.811013 4889 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:06:58 crc kubenswrapper[4889]: E1128 07:06:58.811057 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift podName:637e0576-2707-4c19-82d5-837d5e39578a nodeName:}" failed. No retries permitted until 2025-11-28 07:07:06.811044123 +0000 UTC m=+1149.781278278 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift") pod "swift-storage-0" (UID: "637e0576-2707-4c19-82d5-837d5e39578a") : configmap "swift-ring-files" not found
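That is the fifth consecutive failure of the same etc-swift mount, and the durationBeforeRetry values across this section double cleanly: 500ms, 1s, 2s, 4s, 8s. The swift-ring-rebalance-j6wjv job started above is presumably the producer of swift-ring-files; once it publishes the ConfigMap, swift-storage-0 can mount the volume. The doubling can be checked mechanically against the journal text (a throwaway snippet over the five annotations quoted from this section, not part of any tooling):

import re

log = ("(durationBeforeRetry 500ms). (durationBeforeRetry 1s). "
       "(durationBeforeRetry 2s). (durationBeforeRetry 4s). "
       "(durationBeforeRetry 8s).")

unit = {"ms": 0.001, "s": 1.0}
delays = [int(v) * unit[u]
          for v, u in re.findall(r"durationBeforeRetry (\d+)(ms|s)", log)]
print(delays)                                               # [0.5, 1.0, 2.0, 4.0, 8.0]
print(all(b == 2 * a for a, b in zip(delays, delays[1:])))  # True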
event={"ID":"b5e1fb75-c9d4-40d4-97fa-41162ea57360","Type":"ContainerStarted","Data":"7949847cf011339042752b019b37409406fda238eb534a1e87744bf80ec9cb07"} Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.947166 4889 generic.go:334] "Generic (PLEG): container finished" podID="1382a8ac-8448-45ed-8bd1-74426b1aa746" containerID="b7ac8ffea194ee1a9346ca07c093bf39841b5ab94b2cd10b7674049ead231395" exitCode=0 Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.947200 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nstbj" event={"ID":"1382a8ac-8448-45ed-8bd1-74426b1aa746","Type":"ContainerDied","Data":"b7ac8ffea194ee1a9346ca07c093bf39841b5ab94b2cd10b7674049ead231395"} Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.947217 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nstbj" event={"ID":"1382a8ac-8448-45ed-8bd1-74426b1aa746","Type":"ContainerStarted","Data":"1d690cc2e31b53314d9dfd2b955b5f0c79cd79661745eebe1152cd1382317a99"} Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.975538 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f2a1-account-create-update-lhfcx"] Nov 28 07:06:58 crc kubenswrapper[4889]: I1128 07:06:58.976806 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-75e4-account-create-update-bkcxt" podStartSLOduration=1.976789441 podStartE2EDuration="1.976789441s" podCreationTimestamp="2025-11-28 07:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:06:58.971996453 +0000 UTC m=+1141.942230608" watchObservedRunningTime="2025-11-28 07:06:58.976789441 +0000 UTC m=+1141.947023596" Nov 28 07:06:58 crc kubenswrapper[4889]: W1128 07:06:58.983131 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66cdccd7_317a_47fb_a7e1_06ac1924af9c.slice/crio-a9a2b7b040e7683440dcef5db485d7d4cd5ae1be0253b822122e6e0dea04e460 WatchSource:0}: Error finding container a9a2b7b040e7683440dcef5db485d7d4cd5ae1be0253b822122e6e0dea04e460: Status 404 returned error can't find the container with id a9a2b7b040e7683440dcef5db485d7d4cd5ae1be0253b822122e6e0dea04e460 Nov 28 07:06:59 crc kubenswrapper[4889]: I1128 07:06:59.961319 4889 generic.go:334] "Generic (PLEG): container finished" podID="53b94446-0a24-4eaa-ab88-62168ad8c7b7" containerID="c1ba8c5c6fed2b237e0d295eeb769adf091e1a2ffd35bf71a08857b2b11f23fd" exitCode=0 Nov 28 07:06:59 crc kubenswrapper[4889]: I1128 07:06:59.961429 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7grgp" event={"ID":"53b94446-0a24-4eaa-ab88-62168ad8c7b7","Type":"ContainerDied","Data":"c1ba8c5c6fed2b237e0d295eeb769adf091e1a2ffd35bf71a08857b2b11f23fd"} Nov 28 07:06:59 crc kubenswrapper[4889]: I1128 07:06:59.963764 4889 generic.go:334] "Generic (PLEG): container finished" podID="66cdccd7-317a-47fb-a7e1-06ac1924af9c" containerID="200e7722bf6ef29e833362691d3b0097abcbb8024dfe1437982547abdecb41e5" exitCode=0 Nov 28 07:06:59 crc kubenswrapper[4889]: I1128 07:06:59.963904 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f2a1-account-create-update-lhfcx" event={"ID":"66cdccd7-317a-47fb-a7e1-06ac1924af9c","Type":"ContainerDied","Data":"200e7722bf6ef29e833362691d3b0097abcbb8024dfe1437982547abdecb41e5"} Nov 28 07:06:59 crc kubenswrapper[4889]: I1128 07:06:59.963964 4889 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-f2a1-account-create-update-lhfcx" event={"ID":"66cdccd7-317a-47fb-a7e1-06ac1924af9c","Type":"ContainerStarted","Data":"a9a2b7b040e7683440dcef5db485d7d4cd5ae1be0253b822122e6e0dea04e460"} Nov 28 07:06:59 crc kubenswrapper[4889]: I1128 07:06:59.966994 4889 generic.go:334] "Generic (PLEG): container finished" podID="3ff3fdda-c81d-4b71-967a-d482454d5e3e" containerID="da3ee6917396050d8ec1b497f4f1a88e07614540f1e4e0679fed1d3c1acad3ab" exitCode=0 Nov 28 07:06:59 crc kubenswrapper[4889]: I1128 07:06:59.967115 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75e4-account-create-update-bkcxt" event={"ID":"3ff3fdda-c81d-4b71-967a-d482454d5e3e","Type":"ContainerDied","Data":"da3ee6917396050d8ec1b497f4f1a88e07614540f1e4e0679fed1d3c1acad3ab"} Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.112461 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.165982 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bd7c66845-ljrtx"] Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.166205 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" podUID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" containerName="dnsmasq-dns" containerID="cri-o://3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5" gracePeriod=10 Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.397618 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5fjcb" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.447040 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0713e6-6a1f-45ee-9929-4ab652d46e06-operator-scripts\") pod \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\" (UID: \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.447159 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vctqk\" (UniqueName: \"kubernetes.io/projected/0c0713e6-6a1f-45ee-9929-4ab652d46e06-kube-api-access-vctqk\") pod \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\" (UID: \"0c0713e6-6a1f-45ee-9929-4ab652d46e06\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.447634 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c0713e6-6a1f-45ee-9929-4ab652d46e06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c0713e6-6a1f-45ee-9929-4ab652d46e06" (UID: "0c0713e6-6a1f-45ee-9929-4ab652d46e06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.447738 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0713e6-6a1f-45ee-9929-4ab652d46e06-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.455558 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0713e6-6a1f-45ee-9929-4ab652d46e06-kube-api-access-vctqk" (OuterVolumeSpecName: "kube-api-access-vctqk") pod "0c0713e6-6a1f-45ee-9929-4ab652d46e06" (UID: "0c0713e6-6a1f-45ee-9929-4ab652d46e06"). 
InnerVolumeSpecName "kube-api-access-vctqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.538205 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-df7b-account-create-update-8ltbq" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.540667 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nstbj" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.549780 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vctqk\" (UniqueName: \"kubernetes.io/projected/0c0713e6-6a1f-45ee-9929-4ab652d46e06-kube-api-access-vctqk\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.650924 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6v5p\" (UniqueName: \"kubernetes.io/projected/1382a8ac-8448-45ed-8bd1-74426b1aa746-kube-api-access-x6v5p\") pod \"1382a8ac-8448-45ed-8bd1-74426b1aa746\" (UID: \"1382a8ac-8448-45ed-8bd1-74426b1aa746\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.651294 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1fb75-c9d4-40d4-97fa-41162ea57360-operator-scripts\") pod \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\" (UID: \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.651325 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ld2b\" (UniqueName: \"kubernetes.io/projected/b5e1fb75-c9d4-40d4-97fa-41162ea57360-kube-api-access-2ld2b\") pod \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\" (UID: \"b5e1fb75-c9d4-40d4-97fa-41162ea57360\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.651416 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1382a8ac-8448-45ed-8bd1-74426b1aa746-operator-scripts\") pod \"1382a8ac-8448-45ed-8bd1-74426b1aa746\" (UID: \"1382a8ac-8448-45ed-8bd1-74426b1aa746\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.652258 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e1fb75-c9d4-40d4-97fa-41162ea57360-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5e1fb75-c9d4-40d4-97fa-41162ea57360" (UID: "b5e1fb75-c9d4-40d4-97fa-41162ea57360"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.652351 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1382a8ac-8448-45ed-8bd1-74426b1aa746-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1382a8ac-8448-45ed-8bd1-74426b1aa746" (UID: "1382a8ac-8448-45ed-8bd1-74426b1aa746"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.656758 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1382a8ac-8448-45ed-8bd1-74426b1aa746-kube-api-access-x6v5p" (OuterVolumeSpecName: "kube-api-access-x6v5p") pod "1382a8ac-8448-45ed-8bd1-74426b1aa746" (UID: "1382a8ac-8448-45ed-8bd1-74426b1aa746"). InnerVolumeSpecName "kube-api-access-x6v5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.656838 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e1fb75-c9d4-40d4-97fa-41162ea57360-kube-api-access-2ld2b" (OuterVolumeSpecName: "kube-api-access-2ld2b") pod "b5e1fb75-c9d4-40d4-97fa-41162ea57360" (UID: "b5e1fb75-c9d4-40d4-97fa-41162ea57360"). InnerVolumeSpecName "kube-api-access-2ld2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.666964 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.754040 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-nb\") pod \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.754209 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-config\") pod \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.754278 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbwgf\" (UniqueName: \"kubernetes.io/projected/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-kube-api-access-vbwgf\") pod \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.754528 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-dns-svc\") pod \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.754596 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-sb\") pod \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\" (UID: \"cdb0efae-f6ec-4212-b26a-5185a1d09c2f\") " Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.755173 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1382a8ac-8448-45ed-8bd1-74426b1aa746-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.755291 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6v5p\" (UniqueName: \"kubernetes.io/projected/1382a8ac-8448-45ed-8bd1-74426b1aa746-kube-api-access-x6v5p\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.755312 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1fb75-c9d4-40d4-97fa-41162ea57360-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.755326 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ld2b\" (UniqueName: \"kubernetes.io/projected/b5e1fb75-c9d4-40d4-97fa-41162ea57360-kube-api-access-2ld2b\") on node \"crc\" 
DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.774972 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-kube-api-access-vbwgf" (OuterVolumeSpecName: "kube-api-access-vbwgf") pod "cdb0efae-f6ec-4212-b26a-5185a1d09c2f" (UID: "cdb0efae-f6ec-4212-b26a-5185a1d09c2f"). InnerVolumeSpecName "kube-api-access-vbwgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.792690 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdb0efae-f6ec-4212-b26a-5185a1d09c2f" (UID: "cdb0efae-f6ec-4212-b26a-5185a1d09c2f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.810008 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cdb0efae-f6ec-4212-b26a-5185a1d09c2f" (UID: "cdb0efae-f6ec-4212-b26a-5185a1d09c2f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.815282 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-config" (OuterVolumeSpecName: "config") pod "cdb0efae-f6ec-4212-b26a-5185a1d09c2f" (UID: "cdb0efae-f6ec-4212-b26a-5185a1d09c2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.822355 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdb0efae-f6ec-4212-b26a-5185a1d09c2f" (UID: "cdb0efae-f6ec-4212-b26a-5185a1d09c2f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.856935 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.856967 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbwgf\" (UniqueName: \"kubernetes.io/projected/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-kube-api-access-vbwgf\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.856981 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.856990 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.856999 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdb0efae-f6ec-4212-b26a-5185a1d09c2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.976173 4889 generic.go:334] "Generic (PLEG): container finished" podID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" containerID="3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5" exitCode=0 Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.976225 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" event={"ID":"cdb0efae-f6ec-4212-b26a-5185a1d09c2f","Type":"ContainerDied","Data":"3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5"} Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.976283 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" event={"ID":"cdb0efae-f6ec-4212-b26a-5185a1d09c2f","Type":"ContainerDied","Data":"fa0044a84fcbad3c4537bc8f4a8dbae0ffe362bbb6a24e13c6ad1023fb7828a5"} Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.976308 4889 scope.go:117] "RemoveContainer" containerID="3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.976303 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd7c66845-ljrtx" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.982137 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5fjcb" event={"ID":"0c0713e6-6a1f-45ee-9929-4ab652d46e06","Type":"ContainerDied","Data":"b1599fd05e798a4a9f0937168eab32ba357d369b4d073b3be73713a7a25d497f"} Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.982170 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1599fd05e798a4a9f0937168eab32ba357d369b4d073b3be73713a7a25d497f" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.982213 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5fjcb" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.990137 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nstbj" event={"ID":"1382a8ac-8448-45ed-8bd1-74426b1aa746","Type":"ContainerDied","Data":"1d690cc2e31b53314d9dfd2b955b5f0c79cd79661745eebe1152cd1382317a99"} Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.990183 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d690cc2e31b53314d9dfd2b955b5f0c79cd79661745eebe1152cd1382317a99" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.990243 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nstbj" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.994481 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-df7b-account-create-update-8ltbq" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.994564 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-df7b-account-create-update-8ltbq" event={"ID":"b5e1fb75-c9d4-40d4-97fa-41162ea57360","Type":"ContainerDied","Data":"7949847cf011339042752b019b37409406fda238eb534a1e87744bf80ec9cb07"} Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.994612 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7949847cf011339042752b019b37409406fda238eb534a1e87744bf80ec9cb07" Nov 28 07:07:00 crc kubenswrapper[4889]: I1128 07:07:00.996872 4889 scope.go:117] "RemoveContainer" containerID="a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.013689 4889 scope.go:117] "RemoveContainer" containerID="3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5" Nov 28 07:07:01 crc kubenswrapper[4889]: E1128 07:07:01.014104 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5\": container with ID starting with 3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5 not found: ID does not exist" containerID="3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.014138 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5"} err="failed to get container status \"3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5\": rpc error: code = NotFound desc = could not find container \"3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5\": container with ID starting with 3c9b69fdf7d823c6f43e7f6f39bd0116f2a8cff485adf1e44f6d683198d985f5 not found: ID does not exist" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.014158 4889 scope.go:117] "RemoveContainer" containerID="a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507" Nov 28 07:07:01 crc kubenswrapper[4889]: E1128 07:07:01.014388 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507\": container with ID starting with a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507 not found: ID does not exist" 
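
The NotFound errors just above are benign: by the time RemoveContainer went to verify the dnsmasq containers, CRI-O had already pruned them, and the kubelet treats "already gone" as success. A sketch of that idempotent delete, using the real google.golang.org/grpc status/codes helpers but a stand-in for the CRI client:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerStatus stands in for the CRI ContainerStatus RPC; once the
// runtime has pruned the container it answers NotFound, exactly as in
// the log.go:32 lines above.
func containerStatus(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

// removeContainer mirrors the kubelet's idempotent delete: NotFound
// from the runtime means the desired end state (container absent) is
// already true, so the error is logged and swallowed rather than
// failing the cleanup.
func removeContainer(id string) error {
	if err := containerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("DeleteContainer returned error (ignored): %v\n", err)
			return nil
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	// ...otherwise issue the real RemoveContainer RPC...
	return nil
}

func main() {
	_ = removeContainer("3c9b69fdf7d8")
}
```
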
containerID="a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.014427 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507"} err="failed to get container status \"a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507\": rpc error: code = NotFound desc = could not find container \"a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507\": container with ID starting with a289dae30cf871b92717c82b010a2857f9394b0a5a1f80c18d232f1826fd4507 not found: ID does not exist" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.018828 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bd7c66845-ljrtx"] Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.026806 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bd7c66845-ljrtx"] Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.346884 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" path="/var/lib/kubelet/pods/cdb0efae-f6ec-4212-b26a-5185a1d09c2f/volumes" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.365254 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7grgp" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.383077 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.414418 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.469424 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cdccd7-317a-47fb-a7e1-06ac1924af9c-operator-scripts\") pod \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\" (UID: \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\") " Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.469472 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx2vm\" (UniqueName: \"kubernetes.io/projected/66cdccd7-317a-47fb-a7e1-06ac1924af9c-kube-api-access-xx2vm\") pod \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\" (UID: \"66cdccd7-317a-47fb-a7e1-06ac1924af9c\") " Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.469501 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ff3fdda-c81d-4b71-967a-d482454d5e3e-operator-scripts\") pod \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\" (UID: \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\") " Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.469548 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csnsg\" (UniqueName: \"kubernetes.io/projected/3ff3fdda-c81d-4b71-967a-d482454d5e3e-kube-api-access-csnsg\") pod \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\" (UID: \"3ff3fdda-c81d-4b71-967a-d482454d5e3e\") " Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.469636 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b94446-0a24-4eaa-ab88-62168ad8c7b7-operator-scripts\") 
pod \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\" (UID: \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\") " Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.469697 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2nm9\" (UniqueName: \"kubernetes.io/projected/53b94446-0a24-4eaa-ab88-62168ad8c7b7-kube-api-access-x2nm9\") pod \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\" (UID: \"53b94446-0a24-4eaa-ab88-62168ad8c7b7\") " Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.470666 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53b94446-0a24-4eaa-ab88-62168ad8c7b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53b94446-0a24-4eaa-ab88-62168ad8c7b7" (UID: "53b94446-0a24-4eaa-ab88-62168ad8c7b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.471031 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66cdccd7-317a-47fb-a7e1-06ac1924af9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66cdccd7-317a-47fb-a7e1-06ac1924af9c" (UID: "66cdccd7-317a-47fb-a7e1-06ac1924af9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.471208 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff3fdda-c81d-4b71-967a-d482454d5e3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ff3fdda-c81d-4b71-967a-d482454d5e3e" (UID: "3ff3fdda-c81d-4b71-967a-d482454d5e3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.477058 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b94446-0a24-4eaa-ab88-62168ad8c7b7-kube-api-access-x2nm9" (OuterVolumeSpecName: "kube-api-access-x2nm9") pod "53b94446-0a24-4eaa-ab88-62168ad8c7b7" (UID: "53b94446-0a24-4eaa-ab88-62168ad8c7b7"). InnerVolumeSpecName "kube-api-access-x2nm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.477093 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66cdccd7-317a-47fb-a7e1-06ac1924af9c-kube-api-access-xx2vm" (OuterVolumeSpecName: "kube-api-access-xx2vm") pod "66cdccd7-317a-47fb-a7e1-06ac1924af9c" (UID: "66cdccd7-317a-47fb-a7e1-06ac1924af9c"). InnerVolumeSpecName "kube-api-access-xx2vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.477166 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff3fdda-c81d-4b71-967a-d482454d5e3e-kube-api-access-csnsg" (OuterVolumeSpecName: "kube-api-access-csnsg") pod "3ff3fdda-c81d-4b71-967a-d482454d5e3e" (UID: "3ff3fdda-c81d-4b71-967a-d482454d5e3e"). InnerVolumeSpecName "kube-api-access-csnsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.572578 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2nm9\" (UniqueName: \"kubernetes.io/projected/53b94446-0a24-4eaa-ab88-62168ad8c7b7-kube-api-access-x2nm9\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.572607 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cdccd7-317a-47fb-a7e1-06ac1924af9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.572616 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx2vm\" (UniqueName: \"kubernetes.io/projected/66cdccd7-317a-47fb-a7e1-06ac1924af9c-kube-api-access-xx2vm\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.572625 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ff3fdda-c81d-4b71-967a-d482454d5e3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.572634 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csnsg\" (UniqueName: \"kubernetes.io/projected/3ff3fdda-c81d-4b71-967a-d482454d5e3e-kube-api-access-csnsg\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:01 crc kubenswrapper[4889]: I1128 07:07:01.572642 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b94446-0a24-4eaa-ab88-62168ad8c7b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.007535 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75e4-account-create-update-bkcxt" Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.007541 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75e4-account-create-update-bkcxt" event={"ID":"3ff3fdda-c81d-4b71-967a-d482454d5e3e","Type":"ContainerDied","Data":"7529ef0a3de2b3cfc854c9168f5f9601912763c40ecc245e8b780fcf7534480c"} Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.007610 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7529ef0a3de2b3cfc854c9168f5f9601912763c40ecc245e8b780fcf7534480c" Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.012257 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7grgp" event={"ID":"53b94446-0a24-4eaa-ab88-62168ad8c7b7","Type":"ContainerDied","Data":"229a38604bc8bb4d49de1e21a25c5a11be81d1dbf55782cff3df549decef4b3c"} Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.012284 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="229a38604bc8bb4d49de1e21a25c5a11be81d1dbf55782cff3df549decef4b3c" Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.012324 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7grgp" Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.014105 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f2a1-account-create-update-lhfcx" event={"ID":"66cdccd7-317a-47fb-a7e1-06ac1924af9c","Type":"ContainerDied","Data":"a9a2b7b040e7683440dcef5db485d7d4cd5ae1be0253b822122e6e0dea04e460"} Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.014130 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9a2b7b040e7683440dcef5db485d7d4cd5ae1be0253b822122e6e0dea04e460" Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.014171 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f2a1-account-create-update-lhfcx" Nov 28 07:07:02 crc kubenswrapper[4889]: I1128 07:07:02.572631 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.023949 4889 generic.go:334] "Generic (PLEG): container finished" podID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" containerID="278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60" exitCode=0 Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.023987 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b744978-786e-4ab0-8a5c-1e8e3f9a2809","Type":"ContainerDied","Data":"278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60"} Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296205 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8cb4f"] Nov 28 07:07:03 crc kubenswrapper[4889]: E1128 07:07:03.296531 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0713e6-6a1f-45ee-9929-4ab652d46e06" containerName="mariadb-database-create" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296548 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0713e6-6a1f-45ee-9929-4ab652d46e06" containerName="mariadb-database-create" Nov 28 07:07:03 crc kubenswrapper[4889]: E1128 07:07:03.296568 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66cdccd7-317a-47fb-a7e1-06ac1924af9c" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296575 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="66cdccd7-317a-47fb-a7e1-06ac1924af9c" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: E1128 07:07:03.296587 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" containerName="init" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296593 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" containerName="init" Nov 28 07:07:03 crc kubenswrapper[4889]: E1128 07:07:03.296600 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b94446-0a24-4eaa-ab88-62168ad8c7b7" containerName="mariadb-database-create" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296606 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b94446-0a24-4eaa-ab88-62168ad8c7b7" containerName="mariadb-database-create" Nov 28 07:07:03 crc kubenswrapper[4889]: E1128 07:07:03.296619 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1382a8ac-8448-45ed-8bd1-74426b1aa746" containerName="mariadb-database-create" Nov 28 07:07:03 crc 
kubenswrapper[4889]: I1128 07:07:03.296624 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="1382a8ac-8448-45ed-8bd1-74426b1aa746" containerName="mariadb-database-create" Nov 28 07:07:03 crc kubenswrapper[4889]: E1128 07:07:03.296635 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e1fb75-c9d4-40d4-97fa-41162ea57360" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296640 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e1fb75-c9d4-40d4-97fa-41162ea57360" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: E1128 07:07:03.296651 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff3fdda-c81d-4b71-967a-d482454d5e3e" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296658 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff3fdda-c81d-4b71-967a-d482454d5e3e" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: E1128 07:07:03.296675 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" containerName="dnsmasq-dns" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296681 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" containerName="dnsmasq-dns" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296866 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0713e6-6a1f-45ee-9929-4ab652d46e06" containerName="mariadb-database-create" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296880 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb0efae-f6ec-4212-b26a-5185a1d09c2f" containerName="dnsmasq-dns" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296890 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e1fb75-c9d4-40d4-97fa-41162ea57360" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296902 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff3fdda-c81d-4b71-967a-d482454d5e3e" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296914 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="1382a8ac-8448-45ed-8bd1-74426b1aa746" containerName="mariadb-database-create" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296928 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="66cdccd7-317a-47fb-a7e1-06ac1924af9c" containerName="mariadb-account-create-update" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.296941 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b94446-0a24-4eaa-ab88-62168ad8c7b7" containerName="mariadb-database-create" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.297659 4889 util.go:30] "No sandbox for pod can be found. 
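
Before admitting glance-db-sync-8cb4f, the CPU and memory managers sweep state left behind by terminated pods (all the mariadb-*, dnsmasq-dns, and init containers listed above), so stale CPUSet pinning cannot be inherited by the new pod. A toy version of that sweep; the map layout is illustrative, not the cpu_manager's actual state schema:

```go
package main

import "fmt"

type key struct{ podUID, container string }

// cpuAssignments is a toy stand-in for the CPU manager's state file
// (state_mem.go): pinned CPU sets keyed by pod UID and container name.
var cpuAssignments = map[key]string{
	{"0c0713e6-6a1f-45ee-9929-4ab652d46e06", "mariadb-database-create"}: "0-3",
	{"cdb0efae-f6ec-4212-b26a-5185a1d09c2f", "dnsmasq-dns"}:             "0-3",
}

// removeStaleState drops assignments whose pod is no longer active,
// the same sweep cpu_manager.go:410 runs before admitting a new pod.
func removeStaleState(activePods map[string]bool) {
	for k := range cpuAssignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
			delete(cpuAssignments, k)
		}
	}
}

func main() {
	// glance-db-sync-8cb4f is being admitted; none of the old UIDs are active.
	removeStaleState(map[string]bool{"8d6cc417-c977-4f6e-8e9c-b420b524d3d5": true})
}
```
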
Need to start a new one" pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.300287 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.300333 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sp86l" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.307345 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwd9\" (UniqueName: \"kubernetes.io/projected/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-kube-api-access-rcwd9\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.307387 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-db-sync-config-data\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.307737 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-config-data\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.307845 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-combined-ca-bundle\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.318156 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8cb4f"] Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.409985 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-config-data\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.410087 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-combined-ca-bundle\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.410145 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwd9\" (UniqueName: \"kubernetes.io/projected/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-kube-api-access-rcwd9\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.410166 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-db-sync-config-data\") pod 
\"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.423425 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-db-sync-config-data\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.423478 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-config-data\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.423764 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-combined-ca-bundle\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.431392 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwd9\" (UniqueName: \"kubernetes.io/projected/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-kube-api-access-rcwd9\") pod \"glance-db-sync-8cb4f\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:03 crc kubenswrapper[4889]: I1128 07:07:03.617514 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:04 crc kubenswrapper[4889]: I1128 07:07:04.034557 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b744978-786e-4ab0-8a5c-1e8e3f9a2809","Type":"ContainerStarted","Data":"c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761"} Nov 28 07:07:04 crc kubenswrapper[4889]: I1128 07:07:04.209187 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8cb4f"] Nov 28 07:07:04 crc kubenswrapper[4889]: W1128 07:07:04.214072 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d6cc417_c977_4f6e_8e9c_b420b524d3d5.slice/crio-f815884490266dc60f7e8356b5fdc23bb7456a3fe28428700de859ec6cca6907 WatchSource:0}: Error finding container f815884490266dc60f7e8356b5fdc23bb7456a3fe28428700de859ec6cca6907: Status 404 returned error can't find the container with id f815884490266dc60f7e8356b5fdc23bb7456a3fe28428700de859ec6cca6907 Nov 28 07:07:05 crc kubenswrapper[4889]: I1128 07:07:05.041853 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8cb4f" event={"ID":"8d6cc417-c977-4f6e-8e9c-b420b524d3d5","Type":"ContainerStarted","Data":"f815884490266dc60f7e8356b5fdc23bb7456a3fe28428700de859ec6cca6907"} Nov 28 07:07:06 crc kubenswrapper[4889]: I1128 07:07:06.883134 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:07:06 crc kubenswrapper[4889]: E1128 07:07:06.883456 4889 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:07:06 crc kubenswrapper[4889]: E1128 07:07:06.883731 4889 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:07:06 crc kubenswrapper[4889]: E1128 07:07:06.883807 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift podName:637e0576-2707-4c19-82d5-837d5e39578a nodeName:}" failed. No retries permitted until 2025-11-28 07:07:22.883783586 +0000 UTC m=+1165.854017741 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift") pod "swift-storage-0" (UID: "637e0576-2707-4c19-82d5-837d5e39578a") : configmap "swift-ring-files" not found Nov 28 07:07:09 crc kubenswrapper[4889]: I1128 07:07:09.106319 4889 generic.go:334] "Generic (PLEG): container finished" podID="5c74af7d-0271-4b1d-8c93-88d33ca6329c" containerID="63d8e75a181bc24f5d07e30475fe1dd420fb0a22f5ad9e8334587097adfb8675" exitCode=0 Nov 28 07:07:09 crc kubenswrapper[4889]: I1128 07:07:09.108082 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6wjv" event={"ID":"5c74af7d-0271-4b1d-8c93-88d33ca6329c","Type":"ContainerDied","Data":"63d8e75a181bc24f5d07e30475fe1dd420fb0a22f5ad9e8334587097adfb8675"} Nov 28 07:07:09 crc kubenswrapper[4889]: I1128 07:07:09.108259 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:07:09 crc kubenswrapper[4889]: I1128 07:07:09.196456 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.020324577 podStartE2EDuration="1m6.196437765s" podCreationTimestamp="2025-11-28 07:06:03 +0000 UTC" firstStartedPulling="2025-11-28 07:06:05.595354855 +0000 UTC m=+1088.565589010" lastFinishedPulling="2025-11-28 07:06:29.771468043 +0000 UTC m=+1112.741702198" observedRunningTime="2025-11-28 07:07:09.164040886 +0000 UTC m=+1152.134275041" watchObservedRunningTime="2025-11-28 07:07:09.196437765 +0000 UTC m=+1152.166671920" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.474841 4889 util.go:48] "No ready sandbox for pod can be found. 
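
The etc-swift projected volume cannot be built because the swift-ring-files configmap does not exist yet (swift-ring-rebalance-j6wjv is still producing it), so nestedpendingoperations schedules the next attempt 16 seconds out rather than hot-looping; the retry visible later at 07:07:22.963 succeeds once the configmap is in place. The pod_startup_latency_tracker line above is also decodable: podStartSLOduration=42.020s is podStartE2EDuration=66.196s minus the 24.176s spent pulling images between firstStartedPulling and lastFinishedPulling. A sketch of the per-operation exponential backoff; the 500ms base and 2m cap are assumptions, not the kubelet's exact constants:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// backoff tracks consecutive failures of one volume operation; each
// failure doubles the wait before the next MountVolume.SetUp attempt,
// in the style of nestedpendingoperations.go.
type backoff struct {
	failures int
	nextTry  time.Time
}

func (b *backoff) recordFailure(err error, now time.Time) {
	d := 500 * time.Millisecond << b.failures // 0.5s, 1s, 2s, 4s, 8s, 16s, ...
	if limit := 2 * time.Minute; d > limit {
		d = limit
	}
	b.failures++
	b.nextTry = now.Add(d)
	fmt.Printf("No retries permitted until %s (durationBeforeRetry %s). Error: %v\n",
		b.nextTry.Format(time.RFC3339), d, err)
}

func main() {
	var b backoff
	err := errors.New(`configmap "swift-ring-files" not found`)
	now := time.Now()
	for i := 0; i < 6; i++ { // the sixth failure yields the 16s wait seen in the log
		b.recordFailure(err, now)
	}
}
```
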
Need to start a new one" pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.554513 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dlfmr" podUID="723ca26e-f925-47cc-92e3-998ff36f3e92" containerName="ovn-controller" probeResult="failure" output=< Nov 28 07:07:10 crc kubenswrapper[4889]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 07:07:10 crc kubenswrapper[4889]: > Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.558820 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-dispersionconf\") pod \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.558905 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k8zj\" (UniqueName: \"kubernetes.io/projected/5c74af7d-0271-4b1d-8c93-88d33ca6329c-kube-api-access-5k8zj\") pod \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.558929 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-swiftconf\") pod \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.559039 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-scripts\") pod \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.559070 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-combined-ca-bundle\") pod \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.559089 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-ring-data-devices\") pod \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.559144 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c74af7d-0271-4b1d-8c93-88d33ca6329c-etc-swift\") pod \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\" (UID: \"5c74af7d-0271-4b1d-8c93-88d33ca6329c\") " Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.560272 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c74af7d-0271-4b1d-8c93-88d33ca6329c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5c74af7d-0271-4b1d-8c93-88d33ca6329c" (UID: "5c74af7d-0271-4b1d-8c93-88d33ca6329c"). InnerVolumeSpecName "etc-swift". 
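
The ovn-controller readiness failure above shows how exec-probe output reaches the journal: the kubelet runs the probe command inside the container, and a non-zero exit plus the captured stderr becomes the multi-line probeResult block. For reference, an exec readiness probe of this shape expressed with the k8s.io/api/core/v1 types; the script name and timings here are hypothetical, not read from the actual ovn-controller pod spec:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ovnReadinessProbe builds an exec readiness probe: any non-zero exit
// of the command marks the container unready and its output is logged
// by the kubelet's prober, as in the probeResult block above.
func ovnReadinessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				// hypothetical helper that checks the ovn-controller connection status
				Command: []string{"/bin/sh", "-c", "ovn_controller_readiness.sh"},
			},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
}

func main() {
	p := ovnReadinessProbe()
	fmt.Printf("exec probe: %v every %ds\n", p.Exec.Command, p.PeriodSeconds)
}
```
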
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.560997 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5c74af7d-0271-4b1d-8c93-88d33ca6329c" (UID: "5c74af7d-0271-4b1d-8c93-88d33ca6329c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.571767 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c74af7d-0271-4b1d-8c93-88d33ca6329c-kube-api-access-5k8zj" (OuterVolumeSpecName: "kube-api-access-5k8zj") pod "5c74af7d-0271-4b1d-8c93-88d33ca6329c" (UID: "5c74af7d-0271-4b1d-8c93-88d33ca6329c"). InnerVolumeSpecName "kube-api-access-5k8zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.575754 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5c74af7d-0271-4b1d-8c93-88d33ca6329c" (UID: "5c74af7d-0271-4b1d-8c93-88d33ca6329c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.591461 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c74af7d-0271-4b1d-8c93-88d33ca6329c" (UID: "5c74af7d-0271-4b1d-8c93-88d33ca6329c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.595906 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5c74af7d-0271-4b1d-8c93-88d33ca6329c" (UID: "5c74af7d-0271-4b1d-8c93-88d33ca6329c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.597441 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-scripts" (OuterVolumeSpecName: "scripts") pod "5c74af7d-0271-4b1d-8c93-88d33ca6329c" (UID: "5c74af7d-0271-4b1d-8c93-88d33ca6329c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.631291 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.638761 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.660776 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k8zj\" (UniqueName: \"kubernetes.io/projected/5c74af7d-0271-4b1d-8c93-88d33ca6329c-kube-api-access-5k8zj\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.660810 4889 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.660819 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.660830 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.660840 4889 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c74af7d-0271-4b1d-8c93-88d33ca6329c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.660849 4889 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c74af7d-0271-4b1d-8c93-88d33ca6329c-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.660857 4889 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c74af7d-0271-4b1d-8c93-88d33ca6329c-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.864762 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dlfmr-config-c77ng"] Nov 28 07:07:10 crc kubenswrapper[4889]: E1128 07:07:10.865140 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c74af7d-0271-4b1d-8c93-88d33ca6329c" containerName="swift-ring-rebalance" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.865159 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c74af7d-0271-4b1d-8c93-88d33ca6329c" containerName="swift-ring-rebalance" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.865321 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c74af7d-0271-4b1d-8c93-88d33ca6329c" containerName="swift-ring-rebalance" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.865936 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.868360 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.876394 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dlfmr-config-c77ng"] Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.965198 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-scripts\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.965343 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run-ovn\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.965396 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-log-ovn\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.965515 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvhj5\" (UniqueName: \"kubernetes.io/projected/a5d0c404-9989-48af-980e-1ef89a8ef3fd-kube-api-access-kvhj5\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.965598 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-additional-scripts\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:10 crc kubenswrapper[4889]: I1128 07:07:10.965759 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.067249 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-additional-scripts\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.067357 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.067427 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-scripts\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.067474 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run-ovn\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.067502 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-log-ovn\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.067551 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvhj5\" (UniqueName: \"kubernetes.io/projected/a5d0c404-9989-48af-980e-1ef89a8ef3fd-kube-api-access-kvhj5\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.068171 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run-ovn\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.068238 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-log-ovn\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.068937 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-additional-scripts\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.069010 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.069698 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-scripts\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.088665 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvhj5\" (UniqueName: \"kubernetes.io/projected/a5d0c404-9989-48af-980e-1ef89a8ef3fd-kube-api-access-kvhj5\") pod \"ovn-controller-dlfmr-config-c77ng\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.129327 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j6wjv" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.129430 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6wjv" event={"ID":"5c74af7d-0271-4b1d-8c93-88d33ca6329c","Type":"ContainerDied","Data":"1a3f2ee5d05d9f6a67d8fff412d36f4b7e992e205eb6552b8a5338e56b48415a"} Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.129478 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a3f2ee5d05d9f6a67d8fff412d36f4b7e992e205eb6552b8a5338e56b48415a" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.183861 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:11 crc kubenswrapper[4889]: I1128 07:07:11.887260 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dlfmr-config-c77ng"] Nov 28 07:07:12 crc kubenswrapper[4889]: I1128 07:07:12.139538 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dlfmr-config-c77ng" event={"ID":"a5d0c404-9989-48af-980e-1ef89a8ef3fd","Type":"ContainerStarted","Data":"dcbfe0134b4b0777d855009b151e11d66931f1e2be01d6faa351696849d0f902"} Nov 28 07:07:13 crc kubenswrapper[4889]: E1128 07:07:13.663752 4889 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d501b3_ad2c_4fb8_814d_411dc2a11f20.slice/crio-conmon-0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f.scope\": RecentStats: unable to find data in memory cache]" Nov 28 07:07:14 crc kubenswrapper[4889]: I1128 07:07:14.155607 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dlfmr-config-c77ng" event={"ID":"a5d0c404-9989-48af-980e-1ef89a8ef3fd","Type":"ContainerStarted","Data":"745e1c1cea393b7605682da0ded937a0440ca14c01a32ad3b7645010fdd7508e"} Nov 28 07:07:14 crc kubenswrapper[4889]: I1128 07:07:14.157553 4889 generic.go:334] "Generic (PLEG): container finished" podID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" containerID="0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f" exitCode=0 Nov 28 07:07:14 crc kubenswrapper[4889]: I1128 07:07:14.157583 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90d501b3-ad2c-4fb8-814d-411dc2a11f20","Type":"ContainerDied","Data":"0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f"} Nov 28 07:07:14 crc kubenswrapper[4889]: I1128 07:07:14.182363 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dlfmr-config-c77ng" podStartSLOduration=4.182329079 
podStartE2EDuration="4.182329079s" podCreationTimestamp="2025-11-28 07:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:14.172501548 +0000 UTC m=+1157.142735703" watchObservedRunningTime="2025-11-28 07:07:14.182329079 +0000 UTC m=+1157.152563234" Nov 28 07:07:15 crc kubenswrapper[4889]: I1128 07:07:15.080046 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:07:15 crc kubenswrapper[4889]: I1128 07:07:15.178090 4889 generic.go:334] "Generic (PLEG): container finished" podID="a5d0c404-9989-48af-980e-1ef89a8ef3fd" containerID="745e1c1cea393b7605682da0ded937a0440ca14c01a32ad3b7645010fdd7508e" exitCode=0 Nov 28 07:07:15 crc kubenswrapper[4889]: I1128 07:07:15.178138 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dlfmr-config-c77ng" event={"ID":"a5d0c404-9989-48af-980e-1ef89a8ef3fd","Type":"ContainerDied","Data":"745e1c1cea393b7605682da0ded937a0440ca14c01a32ad3b7645010fdd7508e"} Nov 28 07:07:15 crc kubenswrapper[4889]: I1128 07:07:15.580433 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dlfmr" Nov 28 07:07:22 crc kubenswrapper[4889]: I1128 07:07:22.963385 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:07:22 crc kubenswrapper[4889]: I1128 07:07:22.970983 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"swift-storage-0\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " pod="openstack/swift-storage-0" Nov 28 07:07:22 crc kubenswrapper[4889]: E1128 07:07:22.983504 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:bb01899a9f257500587d26856df89b6046d8623ca11e51c1393030d590c80945" Nov 28 07:07:22 crc kubenswrapper[4889]: E1128 07:07:22.983844 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:bb01899a9f257500587d26856df89b6046d8623ca11e51c1393030d590c80945,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcwd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-8cb4f_openstack(8d6cc417-c977-4f6e-8e9c-b420b524d3d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:07:22 crc kubenswrapper[4889]: E1128 07:07:22.985034 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-8cb4f" podUID="8d6cc417-c977-4f6e-8e9c-b420b524d3d5" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.075191 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.090577 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.166887 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-scripts\") pod \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.167018 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run-ovn\") pod \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.167057 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-log-ovn\") pod \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.167091 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvhj5\" (UniqueName: \"kubernetes.io/projected/a5d0c404-9989-48af-980e-1ef89a8ef3fd-kube-api-access-kvhj5\") pod \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.167153 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-additional-scripts\") pod \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.167198 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run\") pod \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\" (UID: \"a5d0c404-9989-48af-980e-1ef89a8ef3fd\") " Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.167630 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run" (OuterVolumeSpecName: "var-run") pod "a5d0c404-9989-48af-980e-1ef89a8ef3fd" (UID: "a5d0c404-9989-48af-980e-1ef89a8ef3fd"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.167670 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a5d0c404-9989-48af-980e-1ef89a8ef3fd" (UID: "a5d0c404-9989-48af-980e-1ef89a8ef3fd"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.167693 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a5d0c404-9989-48af-980e-1ef89a8ef3fd" (UID: "a5d0c404-9989-48af-980e-1ef89a8ef3fd"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.168829 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a5d0c404-9989-48af-980e-1ef89a8ef3fd" (UID: "a5d0c404-9989-48af-980e-1ef89a8ef3fd"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.169112 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-scripts" (OuterVolumeSpecName: "scripts") pod "a5d0c404-9989-48af-980e-1ef89a8ef3fd" (UID: "a5d0c404-9989-48af-980e-1ef89a8ef3fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.174891 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d0c404-9989-48af-980e-1ef89a8ef3fd-kube-api-access-kvhj5" (OuterVolumeSpecName: "kube-api-access-kvhj5") pod "a5d0c404-9989-48af-980e-1ef89a8ef3fd" (UID: "a5d0c404-9989-48af-980e-1ef89a8ef3fd"). InnerVolumeSpecName "kube-api-access-kvhj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.270186 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90d501b3-ad2c-4fb8-814d-411dc2a11f20","Type":"ContainerStarted","Data":"1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a"} Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.271755 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.272560 4889 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.272617 4889 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.272629 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d0c404-9989-48af-980e-1ef89a8ef3fd-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.272639 4889 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.272651 4889 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d0c404-9989-48af-980e-1ef89a8ef3fd-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.272662 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvhj5\" (UniqueName: \"kubernetes.io/projected/a5d0c404-9989-48af-980e-1ef89a8ef3fd-kube-api-access-kvhj5\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.298179 4889 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dlfmr-config-c77ng" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.299901 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dlfmr-config-c77ng" event={"ID":"a5d0c404-9989-48af-980e-1ef89a8ef3fd","Type":"ContainerDied","Data":"dcbfe0134b4b0777d855009b151e11d66931f1e2be01d6faa351696849d0f902"} Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.299944 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcbfe0134b4b0777d855009b151e11d66931f1e2be01d6faa351696849d0f902" Nov 28 07:07:23 crc kubenswrapper[4889]: E1128 07:07:23.299945 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:bb01899a9f257500587d26856df89b6046d8623ca11e51c1393030d590c80945\\\"\"" pod="openstack/glance-db-sync-8cb4f" podUID="8d6cc417-c977-4f6e-8e9c-b420b524d3d5" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.351677 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371956.503117 podStartE2EDuration="1m20.351659223s" podCreationTimestamp="2025-11-28 07:06:03 +0000 UTC" firstStartedPulling="2025-11-28 07:06:05.220123096 +0000 UTC m=+1088.190357261" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:23.313106896 +0000 UTC m=+1166.283341051" watchObservedRunningTime="2025-11-28 07:07:23.351659223 +0000 UTC m=+1166.321893378" Nov 28 07:07:23 crc kubenswrapper[4889]: I1128 07:07:23.800284 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:07:23 crc kubenswrapper[4889]: W1128 07:07:23.800850 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod637e0576_2707_4c19_82d5_837d5e39578a.slice/crio-a06355f237ffd92f316d2b84f24094b54a7ab391a92cd8ca72eb909bdf71abcc WatchSource:0}: Error finding container a06355f237ffd92f316d2b84f24094b54a7ab391a92cd8ca72eb909bdf71abcc: Status 404 returned error can't find the container with id a06355f237ffd92f316d2b84f24094b54a7ab391a92cd8ca72eb909bdf71abcc Nov 28 07:07:24 crc kubenswrapper[4889]: I1128 07:07:24.172219 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dlfmr-config-c77ng"] Nov 28 07:07:24 crc kubenswrapper[4889]: I1128 07:07:24.180775 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dlfmr-config-c77ng"] Nov 28 07:07:24 crc kubenswrapper[4889]: I1128 07:07:24.305164 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"a06355f237ffd92f316d2b84f24094b54a7ab391a92cd8ca72eb909bdf71abcc"} Nov 28 07:07:25 crc kubenswrapper[4889]: I1128 07:07:25.323450 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58"} Nov 28 07:07:25 crc kubenswrapper[4889]: I1128 07:07:25.341636 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d0c404-9989-48af-980e-1ef89a8ef3fd" 
path="/var/lib/kubelet/pods/a5d0c404-9989-48af-980e-1ef89a8ef3fd/volumes" Nov 28 07:07:26 crc kubenswrapper[4889]: I1128 07:07:26.333567 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc"} Nov 28 07:07:26 crc kubenswrapper[4889]: I1128 07:07:26.334460 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e"} Nov 28 07:07:26 crc kubenswrapper[4889]: I1128 07:07:26.334526 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb"} Nov 28 07:07:28 crc kubenswrapper[4889]: I1128 07:07:28.354681 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3"} Nov 28 07:07:28 crc kubenswrapper[4889]: I1128 07:07:28.355256 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680"} Nov 28 07:07:28 crc kubenswrapper[4889]: I1128 07:07:28.355272 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9"} Nov 28 07:07:30 crc kubenswrapper[4889]: I1128 07:07:30.376130 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f"} Nov 28 07:07:32 crc kubenswrapper[4889]: I1128 07:07:32.406430 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8"} Nov 28 07:07:33 crc kubenswrapper[4889]: I1128 07:07:33.428119 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802"} Nov 28 07:07:33 crc kubenswrapper[4889]: I1128 07:07:33.428492 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd"} Nov 28 07:07:33 crc kubenswrapper[4889]: I1128 07:07:33.428505 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7"} Nov 28 07:07:33 crc kubenswrapper[4889]: I1128 07:07:33.428513 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf"} Nov 28 07:07:33 crc kubenswrapper[4889]: I1128 07:07:33.428521 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b"} Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.441941 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerStarted","Data":"cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086"} Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.491813 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.140731145 podStartE2EDuration="45.491783026s" podCreationTimestamp="2025-11-28 07:06:49 +0000 UTC" firstStartedPulling="2025-11-28 07:07:23.80373841 +0000 UTC m=+1166.773972565" lastFinishedPulling="2025-11-28 07:07:32.154790291 +0000 UTC m=+1175.125024446" observedRunningTime="2025-11-28 07:07:34.48750985 +0000 UTC m=+1177.457744025" watchObservedRunningTime="2025-11-28 07:07:34.491783026 +0000 UTC m=+1177.462017191" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.675959 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.770544 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7568d75687-h7sjj"] Nov 28 07:07:34 crc kubenswrapper[4889]: E1128 07:07:34.771284 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d0c404-9989-48af-980e-1ef89a8ef3fd" containerName="ovn-config" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.771302 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d0c404-9989-48af-980e-1ef89a8ef3fd" containerName="ovn-config" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.771489 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d0c404-9989-48af-980e-1ef89a8ef3fd" containerName="ovn-config" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.774228 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.779684 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.790701 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7568d75687-h7sjj"] Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.876569 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-nb\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.876633 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-swift-storage-0\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.876701 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txgck\" (UniqueName: \"kubernetes.io/projected/5d4065c3-8078-4446-a480-78054208a993-kube-api-access-txgck\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.876877 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-svc\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.877095 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-sb\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.877122 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-config\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.979169 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-sb\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.979223 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-config\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: 
\"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.979275 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-nb\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.979311 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txgck\" (UniqueName: \"kubernetes.io/projected/5d4065c3-8078-4446-a480-78054208a993-kube-api-access-txgck\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.979331 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-swift-storage-0\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.979373 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-svc\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.980642 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-svc\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.980651 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-config\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.980720 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-swift-storage-0\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.980841 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-nb\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.981280 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-sb\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 
07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.984417 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-47lc7"] Nov 28 07:07:34 crc kubenswrapper[4889]: I1128 07:07:34.985534 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.002273 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-47lc7"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.044371 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txgck\" (UniqueName: \"kubernetes.io/projected/5d4065c3-8078-4446-a480-78054208a993-kube-api-access-txgck\") pod \"dnsmasq-dns-7568d75687-h7sjj\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.080742 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gwhb\" (UniqueName: \"kubernetes.io/projected/7aee8230-fc0c-4f50-a805-23331b345013-kube-api-access-6gwhb\") pod \"cinder-db-create-47lc7\" (UID: \"7aee8230-fc0c-4f50-a805-23331b345013\") " pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.081155 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aee8230-fc0c-4f50-a805-23331b345013-operator-scripts\") pod \"cinder-db-create-47lc7\" (UID: \"7aee8230-fc0c-4f50-a805-23331b345013\") " pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.093056 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hgc8n"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.094309 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.097308 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.104657 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6a1b-account-create-update-q5wnf"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.128666 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.130800 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.144913 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hgc8n"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.165046 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6a1b-account-create-update-q5wnf"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.182897 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-operator-scripts\") pod \"barbican-db-create-hgc8n\" (UID: \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\") " pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.182954 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c6935e-abd3-4da3-aa74-662c45289641-operator-scripts\") pod \"cinder-6a1b-account-create-update-q5wnf\" (UID: \"07c6935e-abd3-4da3-aa74-662c45289641\") " pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.182984 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aee8230-fc0c-4f50-a805-23331b345013-operator-scripts\") pod \"cinder-db-create-47lc7\" (UID: \"7aee8230-fc0c-4f50-a805-23331b345013\") " pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.183035 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gwhb\" (UniqueName: \"kubernetes.io/projected/7aee8230-fc0c-4f50-a805-23331b345013-kube-api-access-6gwhb\") pod \"cinder-db-create-47lc7\" (UID: \"7aee8230-fc0c-4f50-a805-23331b345013\") " pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.183102 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbmv\" (UniqueName: \"kubernetes.io/projected/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-kube-api-access-hqbmv\") pod \"barbican-db-create-hgc8n\" (UID: \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\") " pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.183167 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff2mn\" (UniqueName: \"kubernetes.io/projected/07c6935e-abd3-4da3-aa74-662c45289641-kube-api-access-ff2mn\") pod \"cinder-6a1b-account-create-update-q5wnf\" (UID: \"07c6935e-abd3-4da3-aa74-662c45289641\") " pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.184130 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aee8230-fc0c-4f50-a805-23331b345013-operator-scripts\") pod \"cinder-db-create-47lc7\" (UID: \"7aee8230-fc0c-4f50-a805-23331b345013\") " pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.211608 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6gwhb\" (UniqueName: \"kubernetes.io/projected/7aee8230-fc0c-4f50-a805-23331b345013-kube-api-access-6gwhb\") pod \"cinder-db-create-47lc7\" (UID: \"7aee8230-fc0c-4f50-a805-23331b345013\") " pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.213945 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9d4e-account-create-update-skdhq"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.215449 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.218915 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.237044 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9d4e-account-create-update-skdhq"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.284681 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff2mn\" (UniqueName: \"kubernetes.io/projected/07c6935e-abd3-4da3-aa74-662c45289641-kube-api-access-ff2mn\") pod \"cinder-6a1b-account-create-update-q5wnf\" (UID: \"07c6935e-abd3-4da3-aa74-662c45289641\") " pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.284755 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e500e0f-f629-484f-b1c5-2e1b254bcee4-operator-scripts\") pod \"barbican-9d4e-account-create-update-skdhq\" (UID: \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\") " pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.284850 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-operator-scripts\") pod \"barbican-db-create-hgc8n\" (UID: \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\") " pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.284899 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c6935e-abd3-4da3-aa74-662c45289641-operator-scripts\") pod \"cinder-6a1b-account-create-update-q5wnf\" (UID: \"07c6935e-abd3-4da3-aa74-662c45289641\") " pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.284985 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffx6r\" (UniqueName: \"kubernetes.io/projected/5e500e0f-f629-484f-b1c5-2e1b254bcee4-kube-api-access-ffx6r\") pod \"barbican-9d4e-account-create-update-skdhq\" (UID: \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\") " pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.285069 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqbmv\" (UniqueName: \"kubernetes.io/projected/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-kube-api-access-hqbmv\") pod \"barbican-db-create-hgc8n\" (UID: \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\") " pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.285667 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-operator-scripts\") pod \"barbican-db-create-hgc8n\" (UID: \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\") " pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.286904 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c6935e-abd3-4da3-aa74-662c45289641-operator-scripts\") pod \"cinder-6a1b-account-create-update-q5wnf\" (UID: \"07c6935e-abd3-4da3-aa74-662c45289641\") " pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.304641 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.308219 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqbmv\" (UniqueName: \"kubernetes.io/projected/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-kube-api-access-hqbmv\") pod \"barbican-db-create-hgc8n\" (UID: \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\") " pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.350292 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff2mn\" (UniqueName: \"kubernetes.io/projected/07c6935e-abd3-4da3-aa74-662c45289641-kube-api-access-ff2mn\") pod \"cinder-6a1b-account-create-update-q5wnf\" (UID: \"07c6935e-abd3-4da3-aa74-662c45289641\") " pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.384517 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-r6j84"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.386178 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.389419 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.389643 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.389831 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.389953 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j7bgn" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.393403 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffx6r\" (UniqueName: \"kubernetes.io/projected/5e500e0f-f629-484f-b1c5-2e1b254bcee4-kube-api-access-ffx6r\") pod \"barbican-9d4e-account-create-update-skdhq\" (UID: \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\") " pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.393882 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e500e0f-f629-484f-b1c5-2e1b254bcee4-operator-scripts\") pod \"barbican-9d4e-account-create-update-skdhq\" (UID: \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\") " pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.394386 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r6j84"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.395316 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e500e0f-f629-484f-b1c5-2e1b254bcee4-operator-scripts\") pod \"barbican-9d4e-account-create-update-skdhq\" (UID: \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\") " pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.412743 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.415086 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffx6r\" (UniqueName: \"kubernetes.io/projected/5e500e0f-f629-484f-b1c5-2e1b254bcee4-kube-api-access-ffx6r\") pod \"barbican-9d4e-account-create-update-skdhq\" (UID: \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\") " pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.498123 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.499275 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htv22\" (UniqueName: \"kubernetes.io/projected/38118581-7a75-43c6-82b5-cbcf739b47b8-kube-api-access-htv22\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.499316 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-config-data\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.499381 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-combined-ca-bundle\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.515687 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4xrwj"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.517145 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.527606 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-08e6-account-create-update-l98s2"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.528689 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.533303 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.539764 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4xrwj"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.548495 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-08e6-account-create-update-l98s2"] Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.570692 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.601035 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-combined-ca-bundle\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.601405 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htv22\" (UniqueName: \"kubernetes.io/projected/38118581-7a75-43c6-82b5-cbcf739b47b8-kube-api-access-htv22\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.601437 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-config-data\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.608126 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-config-data\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.608623 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-combined-ca-bundle\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.633539 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htv22\" (UniqueName: \"kubernetes.io/projected/38118581-7a75-43c6-82b5-cbcf739b47b8-kube-api-access-htv22\") pod \"keystone-db-sync-r6j84\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.671238 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7568d75687-h7sjj"] Nov 28 07:07:35 crc kubenswrapper[4889]: W1128 07:07:35.700080 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d4065c3_8078_4446_a480_78054208a993.slice/crio-0f03db7fcd8fccf2583352cd98d9bb6d77f7f14fb16bcb63fdf8bf589617fefa WatchSource:0}: Error finding container 0f03db7fcd8fccf2583352cd98d9bb6d77f7f14fb16bcb63fdf8bf589617fefa: Status 404 returned error can't find the container with id 0f03db7fcd8fccf2583352cd98d9bb6d77f7f14fb16bcb63fdf8bf589617fefa Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.703367 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxcng\" (UniqueName: \"kubernetes.io/projected/74cd0e57-f855-4399-b02c-a8740d0e31b7-kube-api-access-bxcng\") pod \"neutron-db-create-4xrwj\" (UID: \"74cd0e57-f855-4399-b02c-a8740d0e31b7\") " pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.703517 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx44m\" (UniqueName: \"kubernetes.io/projected/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-kube-api-access-bx44m\") pod \"neutron-08e6-account-create-update-l98s2\" (UID: \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\") " pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.703589 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74cd0e57-f855-4399-b02c-a8740d0e31b7-operator-scripts\") pod \"neutron-db-create-4xrwj\" (UID: \"74cd0e57-f855-4399-b02c-a8740d0e31b7\") " pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.703623 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-operator-scripts\") pod \"neutron-08e6-account-create-update-l98s2\" (UID: \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\") " pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.709145 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.805602 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxcng\" (UniqueName: \"kubernetes.io/projected/74cd0e57-f855-4399-b02c-a8740d0e31b7-kube-api-access-bxcng\") pod \"neutron-db-create-4xrwj\" (UID: \"74cd0e57-f855-4399-b02c-a8740d0e31b7\") " pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.805678 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx44m\" (UniqueName: \"kubernetes.io/projected/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-kube-api-access-bx44m\") pod \"neutron-08e6-account-create-update-l98s2\" (UID: \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\") " pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.805728 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74cd0e57-f855-4399-b02c-a8740d0e31b7-operator-scripts\") pod \"neutron-db-create-4xrwj\" (UID: \"74cd0e57-f855-4399-b02c-a8740d0e31b7\") " pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.805752 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-operator-scripts\") pod \"neutron-08e6-account-create-update-l98s2\" (UID: \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\") " pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.806486 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-operator-scripts\") pod \"neutron-08e6-account-create-update-l98s2\" (UID: \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\") " pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.807031 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/74cd0e57-f855-4399-b02c-a8740d0e31b7-operator-scripts\") pod \"neutron-db-create-4xrwj\" (UID: \"74cd0e57-f855-4399-b02c-a8740d0e31b7\") " pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.829600 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxcng\" (UniqueName: \"kubernetes.io/projected/74cd0e57-f855-4399-b02c-a8740d0e31b7-kube-api-access-bxcng\") pod \"neutron-db-create-4xrwj\" (UID: \"74cd0e57-f855-4399-b02c-a8740d0e31b7\") " pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.830284 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx44m\" (UniqueName: \"kubernetes.io/projected/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-kube-api-access-bx44m\") pod \"neutron-08e6-account-create-update-l98s2\" (UID: \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\") " pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.844305 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.866396 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:35 crc kubenswrapper[4889]: I1128 07:07:35.890056 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-47lc7"] Nov 28 07:07:35 crc kubenswrapper[4889]: W1128 07:07:35.910464 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7aee8230_fc0c_4f50_a805_23331b345013.slice/crio-b75cd908761110851983fa830abcfd02767fcb60ef43e886b52265737d8f1761 WatchSource:0}: Error finding container b75cd908761110851983fa830abcfd02767fcb60ef43e886b52265737d8f1761: Status 404 returned error can't find the container with id b75cd908761110851983fa830abcfd02767fcb60ef43e886b52265737d8f1761 Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.005500 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hgc8n"] Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.117151 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6a1b-account-create-update-q5wnf"] Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.177805 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9d4e-account-create-update-skdhq"] Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.323047 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r6j84"] Nov 28 07:07:36 crc kubenswrapper[4889]: W1128 07:07:36.326162 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38118581_7a75_43c6_82b5_cbcf739b47b8.slice/crio-d9440e180e0f67ccdf7c5720b9d277e0f45903fe9b77694e98a9091f43eb8b85 WatchSource:0}: Error finding container d9440e180e0f67ccdf7c5720b9d277e0f45903fe9b77694e98a9091f43eb8b85: Status 404 returned error can't find the container with id d9440e180e0f67ccdf7c5720b9d277e0f45903fe9b77694e98a9091f43eb8b85 Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.399398 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4xrwj"] Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.410308 4889 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-08e6-account-create-update-l98s2"] Nov 28 07:07:36 crc kubenswrapper[4889]: W1128 07:07:36.415382 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2b78a6_88b2_4a68_86e4_a5e07ac24456.slice/crio-01f1746b0fd4dd223e0f97991ccf5c3b5257969b16df4285a86dd8dcb4d87682 WatchSource:0}: Error finding container 01f1746b0fd4dd223e0f97991ccf5c3b5257969b16df4285a86dd8dcb4d87682: Status 404 returned error can't find the container with id 01f1746b0fd4dd223e0f97991ccf5c3b5257969b16df4285a86dd8dcb4d87682 Nov 28 07:07:36 crc kubenswrapper[4889]: W1128 07:07:36.415824 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74cd0e57_f855_4399_b02c_a8740d0e31b7.slice/crio-065791021e38f5db57b605553e0df06bad87ca101c0852b14ad481090e6853b9 WatchSource:0}: Error finding container 065791021e38f5db57b605553e0df06bad87ca101c0852b14ad481090e6853b9: Status 404 returned error can't find the container with id 065791021e38f5db57b605553e0df06bad87ca101c0852b14ad481090e6853b9 Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.506449 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r6j84" event={"ID":"38118581-7a75-43c6-82b5-cbcf739b47b8","Type":"ContainerStarted","Data":"d9440e180e0f67ccdf7c5720b9d277e0f45903fe9b77694e98a9091f43eb8b85"} Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.514448 4889 generic.go:334] "Generic (PLEG): container finished" podID="7aee8230-fc0c-4f50-a805-23331b345013" containerID="1b137e75cb748bc2c3a15eb06dd5c410700da10a7dd8199d7ded055b04a1974c" exitCode=0 Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.514525 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-47lc7" event={"ID":"7aee8230-fc0c-4f50-a805-23331b345013","Type":"ContainerDied","Data":"1b137e75cb748bc2c3a15eb06dd5c410700da10a7dd8199d7ded055b04a1974c"} Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.514552 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-47lc7" event={"ID":"7aee8230-fc0c-4f50-a805-23331b345013","Type":"ContainerStarted","Data":"b75cd908761110851983fa830abcfd02767fcb60ef43e886b52265737d8f1761"} Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.518201 4889 generic.go:334] "Generic (PLEG): container finished" podID="5d4065c3-8078-4446-a480-78054208a993" containerID="1e85dcd1deb990841d2aae87fd781728a3695f0a5331ae9f0d27e49767050bb5" exitCode=0 Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.518306 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" event={"ID":"5d4065c3-8078-4446-a480-78054208a993","Type":"ContainerDied","Data":"1e85dcd1deb990841d2aae87fd781728a3695f0a5331ae9f0d27e49767050bb5"} Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.518348 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" event={"ID":"5d4065c3-8078-4446-a480-78054208a993","Type":"ContainerStarted","Data":"0f03db7fcd8fccf2583352cd98d9bb6d77f7f14fb16bcb63fdf8bf589617fefa"} Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.525397 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4xrwj" event={"ID":"74cd0e57-f855-4399-b02c-a8740d0e31b7","Type":"ContainerStarted","Data":"065791021e38f5db57b605553e0df06bad87ca101c0852b14ad481090e6853b9"} Nov 28 
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.543291 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9d4e-account-create-update-skdhq" event={"ID":"5e500e0f-f629-484f-b1c5-2e1b254bcee4","Type":"ContainerStarted","Data":"93cb5748e663fc4799cc331b48ae61183b634dd203239a64cdd7bbc3cd19d38a"}
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.543328 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9d4e-account-create-update-skdhq" event={"ID":"5e500e0f-f629-484f-b1c5-2e1b254bcee4","Type":"ContainerStarted","Data":"bfa421f700730879ad1a04712bdcb841fe811188851854892b4b8cac675aaef8"}
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.549272 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6a1b-account-create-update-q5wnf" event={"ID":"07c6935e-abd3-4da3-aa74-662c45289641","Type":"ContainerStarted","Data":"f696dfe1fc9240b4c6f08cc823ad35ff28c6580eec8b16111f5f28555f902723"}
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.549309 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6a1b-account-create-update-q5wnf" event={"ID":"07c6935e-abd3-4da3-aa74-662c45289641","Type":"ContainerStarted","Data":"34ce7c2644940b953d82907aff7ececd07c04df8ffe29c58d5d127c4996db7fe"}
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.552674 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hgc8n" event={"ID":"ac655aad-d9b7-47d0-b4b6-5f8904f5b925","Type":"ContainerStarted","Data":"fd64288fdc055b1b763c5730ad8c45a2539a208bf63a18c6c7f0721e2b9508fb"}
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.552726 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hgc8n" event={"ID":"ac655aad-d9b7-47d0-b4b6-5f8904f5b925","Type":"ContainerStarted","Data":"04e48e2a81445d567c21230ca4ace4ebde5fa6c6de0da74a73701b4d9ca97ede"}
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.556351 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-08e6-account-create-update-l98s2" event={"ID":"bc2b78a6-88b2-4a68-86e4-a5e07ac24456","Type":"ContainerStarted","Data":"01f1746b0fd4dd223e0f97991ccf5c3b5257969b16df4285a86dd8dcb4d87682"}
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.614610 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-9d4e-account-create-update-skdhq" podStartSLOduration=1.6145896739999999 podStartE2EDuration="1.614589674s" podCreationTimestamp="2025-11-28 07:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:36.589471519 +0000 UTC m=+1179.559705684" watchObservedRunningTime="2025-11-28 07:07:36.614589674 +0000 UTC m=+1179.584823829"
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.670917 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6a1b-account-create-update-q5wnf" podStartSLOduration=1.6708977699999998 podStartE2EDuration="1.67089777s" podCreationTimestamp="2025-11-28 07:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:36.633020118 +0000 UTC m=+1179.603254283" watchObservedRunningTime="2025-11-28 07:07:36.67089777 +0000 UTC m=+1179.641131925"
Nov 28 07:07:36 crc kubenswrapper[4889]: I1128 07:07:36.676195 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-hgc8n" podStartSLOduration=1.676170849 podStartE2EDuration="1.676170849s" podCreationTimestamp="2025-11-28 07:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:36.673606721 +0000 UTC m=+1179.643840866" watchObservedRunningTime="2025-11-28 07:07:36.676170849 +0000 UTC m=+1179.646405004"
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.570963 4889 generic.go:334] "Generic (PLEG): container finished" podID="74cd0e57-f855-4399-b02c-a8740d0e31b7" containerID="80c6cc6184e8e38640ca0ca9b5302f442f97049739e7de3303b5b38510b2b6a5" exitCode=0
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.571007 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4xrwj" event={"ID":"74cd0e57-f855-4399-b02c-a8740d0e31b7","Type":"ContainerDied","Data":"80c6cc6184e8e38640ca0ca9b5302f442f97049739e7de3303b5b38510b2b6a5"}
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.574108 4889 generic.go:334] "Generic (PLEG): container finished" podID="5e500e0f-f629-484f-b1c5-2e1b254bcee4" containerID="93cb5748e663fc4799cc331b48ae61183b634dd203239a64cdd7bbc3cd19d38a" exitCode=0
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.574167 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9d4e-account-create-update-skdhq" event={"ID":"5e500e0f-f629-484f-b1c5-2e1b254bcee4","Type":"ContainerDied","Data":"93cb5748e663fc4799cc331b48ae61183b634dd203239a64cdd7bbc3cd19d38a"}
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.577121 4889 generic.go:334] "Generic (PLEG): container finished" podID="07c6935e-abd3-4da3-aa74-662c45289641" containerID="f696dfe1fc9240b4c6f08cc823ad35ff28c6580eec8b16111f5f28555f902723" exitCode=0
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.577209 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6a1b-account-create-update-q5wnf" event={"ID":"07c6935e-abd3-4da3-aa74-662c45289641","Type":"ContainerDied","Data":"f696dfe1fc9240b4c6f08cc823ad35ff28c6580eec8b16111f5f28555f902723"}
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.579000 4889 generic.go:334] "Generic (PLEG): container finished" podID="ac655aad-d9b7-47d0-b4b6-5f8904f5b925" containerID="fd64288fdc055b1b763c5730ad8c45a2539a208bf63a18c6c7f0721e2b9508fb" exitCode=0
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.579041 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hgc8n" event={"ID":"ac655aad-d9b7-47d0-b4b6-5f8904f5b925","Type":"ContainerDied","Data":"fd64288fdc055b1b763c5730ad8c45a2539a208bf63a18c6c7f0721e2b9508fb"}
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.581807 4889 generic.go:334] "Generic (PLEG): container finished" podID="bc2b78a6-88b2-4a68-86e4-a5e07ac24456" containerID="a3cf8b359e5f2daa8ae56b9273e1c2cfcd41a520a10500d23968f9027d5280a5" exitCode=0
Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.581884 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-08e6-account-create-update-l98s2" event={"ID":"bc2b78a6-88b2-4a68-86e4-a5e07ac24456","Type":"ContainerDied","Data":"a3cf8b359e5f2daa8ae56b9273e1c2cfcd41a520a10500d23968f9027d5280a5"}
event={"ID":"5d4065c3-8078-4446-a480-78054208a993","Type":"ContainerStarted","Data":"961dff94a3f0a3b5b17904d74804729ed1b0381721f45560e6b3844e2fed7819"} Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.600079 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.676950 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" podStartSLOduration=3.676925014 podStartE2EDuration="3.676925014s" podCreationTimestamp="2025-11-28 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:37.668128937 +0000 UTC m=+1180.638363092" watchObservedRunningTime="2025-11-28 07:07:37.676925014 +0000 UTC m=+1180.647159169" Nov 28 07:07:37 crc kubenswrapper[4889]: I1128 07:07:37.921930 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.055412 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aee8230-fc0c-4f50-a805-23331b345013-operator-scripts\") pod \"7aee8230-fc0c-4f50-a805-23331b345013\" (UID: \"7aee8230-fc0c-4f50-a805-23331b345013\") " Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.055612 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gwhb\" (UniqueName: \"kubernetes.io/projected/7aee8230-fc0c-4f50-a805-23331b345013-kube-api-access-6gwhb\") pod \"7aee8230-fc0c-4f50-a805-23331b345013\" (UID: \"7aee8230-fc0c-4f50-a805-23331b345013\") " Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.056983 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aee8230-fc0c-4f50-a805-23331b345013-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7aee8230-fc0c-4f50-a805-23331b345013" (UID: "7aee8230-fc0c-4f50-a805-23331b345013"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.063537 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aee8230-fc0c-4f50-a805-23331b345013-kube-api-access-6gwhb" (OuterVolumeSpecName: "kube-api-access-6gwhb") pod "7aee8230-fc0c-4f50-a805-23331b345013" (UID: "7aee8230-fc0c-4f50-a805-23331b345013"). InnerVolumeSpecName "kube-api-access-6gwhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.157815 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gwhb\" (UniqueName: \"kubernetes.io/projected/7aee8230-fc0c-4f50-a805-23331b345013-kube-api-access-6gwhb\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.157854 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aee8230-fc0c-4f50-a805-23331b345013-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.612734 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-47lc7" event={"ID":"7aee8230-fc0c-4f50-a805-23331b345013","Type":"ContainerDied","Data":"b75cd908761110851983fa830abcfd02767fcb60ef43e886b52265737d8f1761"} Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.613051 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b75cd908761110851983fa830abcfd02767fcb60ef43e886b52265737d8f1761" Nov 28 07:07:38 crc kubenswrapper[4889]: I1128 07:07:38.612851 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-47lc7" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.031581 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.040859 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.157651 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.175833 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx44m\" (UniqueName: \"kubernetes.io/projected/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-kube-api-access-bx44m\") pod \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\" (UID: \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.176349 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-operator-scripts\") pod \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\" (UID: \"bc2b78a6-88b2-4a68-86e4-a5e07ac24456\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.176380 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff2mn\" (UniqueName: \"kubernetes.io/projected/07c6935e-abd3-4da3-aa74-662c45289641-kube-api-access-ff2mn\") pod \"07c6935e-abd3-4da3-aa74-662c45289641\" (UID: \"07c6935e-abd3-4da3-aa74-662c45289641\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.177680 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c6935e-abd3-4da3-aa74-662c45289641-operator-scripts\") pod \"07c6935e-abd3-4da3-aa74-662c45289641\" (UID: \"07c6935e-abd3-4da3-aa74-662c45289641\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.179539 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c6935e-abd3-4da3-aa74-662c45289641-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07c6935e-abd3-4da3-aa74-662c45289641" (UID: "07c6935e-abd3-4da3-aa74-662c45289641"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.180005 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc2b78a6-88b2-4a68-86e4-a5e07ac24456" (UID: "bc2b78a6-88b2-4a68-86e4-a5e07ac24456"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.180164 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-kube-api-access-bx44m" (OuterVolumeSpecName: "kube-api-access-bx44m") pod "bc2b78a6-88b2-4a68-86e4-a5e07ac24456" (UID: "bc2b78a6-88b2-4a68-86e4-a5e07ac24456"). InnerVolumeSpecName "kube-api-access-bx44m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.182654 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c6935e-abd3-4da3-aa74-662c45289641-kube-api-access-ff2mn" (OuterVolumeSpecName: "kube-api-access-ff2mn") pod "07c6935e-abd3-4da3-aa74-662c45289641" (UID: "07c6935e-abd3-4da3-aa74-662c45289641"). InnerVolumeSpecName "kube-api-access-ff2mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.218737 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.224821 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.281261 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74cd0e57-f855-4399-b02c-a8740d0e31b7-operator-scripts\") pod \"74cd0e57-f855-4399-b02c-a8740d0e31b7\" (UID: \"74cd0e57-f855-4399-b02c-a8740d0e31b7\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.281325 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxcng\" (UniqueName: \"kubernetes.io/projected/74cd0e57-f855-4399-b02c-a8740d0e31b7-kube-api-access-bxcng\") pod \"74cd0e57-f855-4399-b02c-a8740d0e31b7\" (UID: \"74cd0e57-f855-4399-b02c-a8740d0e31b7\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.281947 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx44m\" (UniqueName: \"kubernetes.io/projected/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-kube-api-access-bx44m\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.281968 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2b78a6-88b2-4a68-86e4-a5e07ac24456-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.281980 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff2mn\" (UniqueName: \"kubernetes.io/projected/07c6935e-abd3-4da3-aa74-662c45289641-kube-api-access-ff2mn\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.281991 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c6935e-abd3-4da3-aa74-662c45289641-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.284022 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74cd0e57-f855-4399-b02c-a8740d0e31b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74cd0e57-f855-4399-b02c-a8740d0e31b7" (UID: "74cd0e57-f855-4399-b02c-a8740d0e31b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.285850 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74cd0e57-f855-4399-b02c-a8740d0e31b7-kube-api-access-bxcng" (OuterVolumeSpecName: "kube-api-access-bxcng") pod "74cd0e57-f855-4399-b02c-a8740d0e31b7" (UID: "74cd0e57-f855-4399-b02c-a8740d0e31b7"). InnerVolumeSpecName "kube-api-access-bxcng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.383536 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqbmv\" (UniqueName: \"kubernetes.io/projected/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-kube-api-access-hqbmv\") pod \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\" (UID: \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.383814 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e500e0f-f629-484f-b1c5-2e1b254bcee4-operator-scripts\") pod \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\" (UID: \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.383871 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-operator-scripts\") pod \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\" (UID: \"ac655aad-d9b7-47d0-b4b6-5f8904f5b925\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.383909 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffx6r\" (UniqueName: \"kubernetes.io/projected/5e500e0f-f629-484f-b1c5-2e1b254bcee4-kube-api-access-ffx6r\") pod \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\" (UID: \"5e500e0f-f629-484f-b1c5-2e1b254bcee4\") " Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.384393 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e500e0f-f629-484f-b1c5-2e1b254bcee4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e500e0f-f629-484f-b1c5-2e1b254bcee4" (UID: "5e500e0f-f629-484f-b1c5-2e1b254bcee4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.384749 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac655aad-d9b7-47d0-b4b6-5f8904f5b925" (UID: "ac655aad-d9b7-47d0-b4b6-5f8904f5b925"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.386505 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-kube-api-access-hqbmv" (OuterVolumeSpecName: "kube-api-access-hqbmv") pod "ac655aad-d9b7-47d0-b4b6-5f8904f5b925" (UID: "ac655aad-d9b7-47d0-b4b6-5f8904f5b925"). InnerVolumeSpecName "kube-api-access-hqbmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.387827 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e500e0f-f629-484f-b1c5-2e1b254bcee4-kube-api-access-ffx6r" (OuterVolumeSpecName: "kube-api-access-ffx6r") pod "5e500e0f-f629-484f-b1c5-2e1b254bcee4" (UID: "5e500e0f-f629-484f-b1c5-2e1b254bcee4"). InnerVolumeSpecName "kube-api-access-ffx6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.393190 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e500e0f-f629-484f-b1c5-2e1b254bcee4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.393227 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.393240 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffx6r\" (UniqueName: \"kubernetes.io/projected/5e500e0f-f629-484f-b1c5-2e1b254bcee4-kube-api-access-ffx6r\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.393255 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqbmv\" (UniqueName: \"kubernetes.io/projected/ac655aad-d9b7-47d0-b4b6-5f8904f5b925-kube-api-access-hqbmv\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.393267 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74cd0e57-f855-4399-b02c-a8740d0e31b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.393277 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxcng\" (UniqueName: \"kubernetes.io/projected/74cd0e57-f855-4399-b02c-a8740d0e31b7-kube-api-access-bxcng\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.646675 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hgc8n" event={"ID":"ac655aad-d9b7-47d0-b4b6-5f8904f5b925","Type":"ContainerDied","Data":"04e48e2a81445d567c21230ca4ace4ebde5fa6c6de0da74a73701b4d9ca97ede"} Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.646737 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e48e2a81445d567c21230ca4ace4ebde5fa6c6de0da74a73701b4d9ca97ede" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.646808 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hgc8n" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.679280 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6a1b-account-create-update-q5wnf" event={"ID":"07c6935e-abd3-4da3-aa74-662c45289641","Type":"ContainerDied","Data":"34ce7c2644940b953d82907aff7ececd07c04df8ffe29c58d5d127c4996db7fe"} Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.679326 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ce7c2644940b953d82907aff7ececd07c04df8ffe29c58d5d127c4996db7fe" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.679411 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6a1b-account-create-update-q5wnf" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.702484 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-08e6-account-create-update-l98s2" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.702590 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-08e6-account-create-update-l98s2" event={"ID":"bc2b78a6-88b2-4a68-86e4-a5e07ac24456","Type":"ContainerDied","Data":"01f1746b0fd4dd223e0f97991ccf5c3b5257969b16df4285a86dd8dcb4d87682"} Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.702625 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f1746b0fd4dd223e0f97991ccf5c3b5257969b16df4285a86dd8dcb4d87682" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.712468 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4xrwj" event={"ID":"74cd0e57-f855-4399-b02c-a8740d0e31b7","Type":"ContainerDied","Data":"065791021e38f5db57b605553e0df06bad87ca101c0852b14ad481090e6853b9"} Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.712503 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065791021e38f5db57b605553e0df06bad87ca101c0852b14ad481090e6853b9" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.712555 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4xrwj" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.714952 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9d4e-account-create-update-skdhq" Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.715056 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9d4e-account-create-update-skdhq" event={"ID":"5e500e0f-f629-484f-b1c5-2e1b254bcee4","Type":"ContainerDied","Data":"bfa421f700730879ad1a04712bdcb841fe811188851854892b4b8cac675aaef8"} Nov 28 07:07:39 crc kubenswrapper[4889]: I1128 07:07:39.715088 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfa421f700730879ad1a04712bdcb841fe811188851854892b4b8cac675aaef8" Nov 28 07:07:43 crc kubenswrapper[4889]: I1128 07:07:43.772292 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8cb4f" event={"ID":"8d6cc417-c977-4f6e-8e9c-b420b524d3d5","Type":"ContainerStarted","Data":"23f7858730740d33a3badd825c56dd3177715fcfdc6cdc319dbc0205c639dc27"} Nov 28 07:07:43 crc kubenswrapper[4889]: I1128 07:07:43.773670 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r6j84" event={"ID":"38118581-7a75-43c6-82b5-cbcf739b47b8","Type":"ContainerStarted","Data":"ca8b62caf3e2fcb8383263600f9166e50b9e8e2684b835083a8ce3701a719aa2"} Nov 28 07:07:43 crc kubenswrapper[4889]: I1128 07:07:43.790897 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8cb4f" podStartSLOduration=5.854243949 podStartE2EDuration="40.790877637s" podCreationTimestamp="2025-11-28 07:07:03 +0000 UTC" firstStartedPulling="2025-11-28 07:07:04.216263078 +0000 UTC m=+1147.186497233" lastFinishedPulling="2025-11-28 07:07:39.152896766 +0000 UTC m=+1182.123130921" observedRunningTime="2025-11-28 07:07:43.787863799 +0000 UTC m=+1186.758097954" watchObservedRunningTime="2025-11-28 07:07:43.790877637 +0000 UTC m=+1186.761111792" Nov 28 07:07:43 crc kubenswrapper[4889]: I1128 07:07:43.814137 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-r6j84" podStartSLOduration=2.469733585 
podStartE2EDuration="8.81411423s" podCreationTimestamp="2025-11-28 07:07:35 +0000 UTC" firstStartedPulling="2025-11-28 07:07:36.328229484 +0000 UTC m=+1179.298463639" lastFinishedPulling="2025-11-28 07:07:42.672610129 +0000 UTC m=+1185.642844284" observedRunningTime="2025-11-28 07:07:43.806789355 +0000 UTC m=+1186.777023520" watchObservedRunningTime="2025-11-28 07:07:43.81411423 +0000 UTC m=+1186.784348385" Nov 28 07:07:45 crc kubenswrapper[4889]: I1128 07:07:45.098845 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:07:45 crc kubenswrapper[4889]: I1128 07:07:45.170731 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f6d79597f-rtnp6"] Nov 28 07:07:45 crc kubenswrapper[4889]: I1128 07:07:45.174880 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" podUID="d9beecee-59b4-475a-bcb2-9c37360314f8" containerName="dnsmasq-dns" containerID="cri-o://ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a" gracePeriod=10 Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.648156 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.790588 4889 generic.go:334] "Generic (PLEG): container finished" podID="d9beecee-59b4-475a-bcb2-9c37360314f8" containerID="ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a" exitCode=0 Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.790625 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" event={"ID":"d9beecee-59b4-475a-bcb2-9c37360314f8","Type":"ContainerDied","Data":"ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a"} Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.790651 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" event={"ID":"d9beecee-59b4-475a-bcb2-9c37360314f8","Type":"ContainerDied","Data":"15cb8b995af2d305f9e721544f934c3354bb20d6c7f280cb8878fc5620802a83"} Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.790671 4889 scope.go:117] "RemoveContainer" containerID="ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.790765 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f6d79597f-rtnp6" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.814788 4889 scope.go:117] "RemoveContainer" containerID="12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.822352 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-nb\") pod \"d9beecee-59b4-475a-bcb2-9c37360314f8\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.823208 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rdrp\" (UniqueName: \"kubernetes.io/projected/d9beecee-59b4-475a-bcb2-9c37360314f8-kube-api-access-2rdrp\") pod \"d9beecee-59b4-475a-bcb2-9c37360314f8\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.823265 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-sb\") pod \"d9beecee-59b4-475a-bcb2-9c37360314f8\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.823300 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-dns-svc\") pod \"d9beecee-59b4-475a-bcb2-9c37360314f8\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.823347 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-config\") pod \"d9beecee-59b4-475a-bcb2-9c37360314f8\" (UID: \"d9beecee-59b4-475a-bcb2-9c37360314f8\") " Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.828854 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9beecee-59b4-475a-bcb2-9c37360314f8-kube-api-access-2rdrp" (OuterVolumeSpecName: "kube-api-access-2rdrp") pod "d9beecee-59b4-475a-bcb2-9c37360314f8" (UID: "d9beecee-59b4-475a-bcb2-9c37360314f8"). InnerVolumeSpecName "kube-api-access-2rdrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.844015 4889 scope.go:117] "RemoveContainer" containerID="ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a" Nov 28 07:07:46 crc kubenswrapper[4889]: E1128 07:07:45.847356 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a\": container with ID starting with ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a not found: ID does not exist" containerID="ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.847427 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a"} err="failed to get container status \"ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a\": rpc error: code = NotFound desc = could not find container \"ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a\": container with ID starting with ca57b1a1f4671260f95d4364a311a968a9cb38e40772293a65e3e2442d121e0a not found: ID does not exist" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.847477 4889 scope.go:117] "RemoveContainer" containerID="12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9" Nov 28 07:07:46 crc kubenswrapper[4889]: E1128 07:07:45.849518 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9\": container with ID starting with 12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9 not found: ID does not exist" containerID="12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.849539 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9"} err="failed to get container status \"12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9\": rpc error: code = NotFound desc = could not find container \"12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9\": container with ID starting with 12bc14b3499dbb38a391bbc4b386f08c759dde8ada1b501e4c925abca42571d9 not found: ID does not exist" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.866162 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9beecee-59b4-475a-bcb2-9c37360314f8" (UID: "d9beecee-59b4-475a-bcb2-9c37360314f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.873175 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-config" (OuterVolumeSpecName: "config") pod "d9beecee-59b4-475a-bcb2-9c37360314f8" (UID: "d9beecee-59b4-475a-bcb2-9c37360314f8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.874681 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9beecee-59b4-475a-bcb2-9c37360314f8" (UID: "d9beecee-59b4-475a-bcb2-9c37360314f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.886596 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9beecee-59b4-475a-bcb2-9c37360314f8" (UID: "d9beecee-59b4-475a-bcb2-9c37360314f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.925830 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.925881 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rdrp\" (UniqueName: \"kubernetes.io/projected/d9beecee-59b4-475a-bcb2-9c37360314f8-kube-api-access-2rdrp\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.925892 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.925901 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:45.925912 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9beecee-59b4-475a-bcb2-9c37360314f8-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:46.141595 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f6d79597f-rtnp6"] Nov 28 07:07:46 crc kubenswrapper[4889]: I1128 07:07:46.159318 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f6d79597f-rtnp6"] Nov 28 07:07:47 crc kubenswrapper[4889]: I1128 07:07:47.341982 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9beecee-59b4-475a-bcb2-9c37360314f8" path="/var/lib/kubelet/pods/d9beecee-59b4-475a-bcb2-9c37360314f8/volumes" Nov 28 07:07:47 crc kubenswrapper[4889]: I1128 07:07:47.809801 4889 generic.go:334] "Generic (PLEG): container finished" podID="38118581-7a75-43c6-82b5-cbcf739b47b8" containerID="ca8b62caf3e2fcb8383263600f9166e50b9e8e2684b835083a8ce3701a719aa2" exitCode=0 Nov 28 07:07:47 crc kubenswrapper[4889]: I1128 07:07:47.809870 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r6j84" event={"ID":"38118581-7a75-43c6-82b5-cbcf739b47b8","Type":"ContainerDied","Data":"ca8b62caf3e2fcb8383263600f9166e50b9e8e2684b835083a8ce3701a719aa2"} Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.106802 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.283980 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-config-data\") pod \"38118581-7a75-43c6-82b5-cbcf739b47b8\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.284037 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htv22\" (UniqueName: \"kubernetes.io/projected/38118581-7a75-43c6-82b5-cbcf739b47b8-kube-api-access-htv22\") pod \"38118581-7a75-43c6-82b5-cbcf739b47b8\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.284175 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-combined-ca-bundle\") pod \"38118581-7a75-43c6-82b5-cbcf739b47b8\" (UID: \"38118581-7a75-43c6-82b5-cbcf739b47b8\") " Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.289337 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38118581-7a75-43c6-82b5-cbcf739b47b8-kube-api-access-htv22" (OuterVolumeSpecName: "kube-api-access-htv22") pod "38118581-7a75-43c6-82b5-cbcf739b47b8" (UID: "38118581-7a75-43c6-82b5-cbcf739b47b8"). InnerVolumeSpecName "kube-api-access-htv22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.308081 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38118581-7a75-43c6-82b5-cbcf739b47b8" (UID: "38118581-7a75-43c6-82b5-cbcf739b47b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.329766 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-config-data" (OuterVolumeSpecName: "config-data") pod "38118581-7a75-43c6-82b5-cbcf739b47b8" (UID: "38118581-7a75-43c6-82b5-cbcf739b47b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.386418 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htv22\" (UniqueName: \"kubernetes.io/projected/38118581-7a75-43c6-82b5-cbcf739b47b8-kube-api-access-htv22\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.386664 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.386676 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38118581-7a75-43c6-82b5-cbcf739b47b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.825475 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r6j84" event={"ID":"38118581-7a75-43c6-82b5-cbcf739b47b8","Type":"ContainerDied","Data":"d9440e180e0f67ccdf7c5720b9d277e0f45903fe9b77694e98a9091f43eb8b85"} Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.825512 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9440e180e0f67ccdf7c5720b9d277e0f45903fe9b77694e98a9091f43eb8b85" Nov 28 07:07:49 crc kubenswrapper[4889]: I1128 07:07:49.825549 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r6j84" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.135667 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-twr6g"] Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136169 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aee8230-fc0c-4f50-a805-23331b345013" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136185 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aee8230-fc0c-4f50-a805-23331b345013" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136205 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38118581-7a75-43c6-82b5-cbcf739b47b8" containerName="keystone-db-sync" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136211 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="38118581-7a75-43c6-82b5-cbcf739b47b8" containerName="keystone-db-sync" Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136229 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2b78a6-88b2-4a68-86e4-a5e07ac24456" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136237 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2b78a6-88b2-4a68-86e4-a5e07ac24456" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136247 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac655aad-d9b7-47d0-b4b6-5f8904f5b925" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136254 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac655aad-d9b7-47d0-b4b6-5f8904f5b925" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136265 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9beecee-59b4-475a-bcb2-9c37360314f8" 
containerName="init" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136271 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9beecee-59b4-475a-bcb2-9c37360314f8" containerName="init" Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136285 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74cd0e57-f855-4399-b02c-a8740d0e31b7" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136291 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cd0e57-f855-4399-b02c-a8740d0e31b7" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136302 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e500e0f-f629-484f-b1c5-2e1b254bcee4" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136308 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e500e0f-f629-484f-b1c5-2e1b254bcee4" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136326 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c6935e-abd3-4da3-aa74-662c45289641" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136334 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c6935e-abd3-4da3-aa74-662c45289641" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: E1128 07:07:50.136345 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9beecee-59b4-475a-bcb2-9c37360314f8" containerName="dnsmasq-dns" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136353 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9beecee-59b4-475a-bcb2-9c37360314f8" containerName="dnsmasq-dns" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136540 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aee8230-fc0c-4f50-a805-23331b345013" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136568 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c6935e-abd3-4da3-aa74-662c45289641" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136581 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac655aad-d9b7-47d0-b4b6-5f8904f5b925" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136592 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e500e0f-f629-484f-b1c5-2e1b254bcee4" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136604 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="38118581-7a75-43c6-82b5-cbcf739b47b8" containerName="keystone-db-sync" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136616 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="74cd0e57-f855-4399-b02c-a8740d0e31b7" containerName="mariadb-database-create" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136626 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2b78a6-88b2-4a68-86e4-a5e07ac24456" containerName="mariadb-account-create-update" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.136639 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9beecee-59b4-475a-bcb2-9c37360314f8" containerName="dnsmasq-dns" Nov 28 
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.137349 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.143173 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.143213 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.143236 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.143209 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.143488 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j7bgn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.143899 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc74c5c67-wqfjn"]
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.145331 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.187035 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc74c5c67-wqfjn"]
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.215314 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-twr6g"]
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301174 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdf8\" (UniqueName: \"kubernetes.io/projected/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-kube-api-access-stdf8\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301229 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301416 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301490 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l6qr\" (UniqueName: \"kubernetes.io/projected/012afe31-0331-4b3a-957f-195a20a27bb9-kube-api-access-9l6qr\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301555 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-config\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301608 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301725 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-svc\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301769 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-fernet-keys\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301827 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-credential-keys\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.301975 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-config-data\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.302048 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-combined-ca-bundle\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.302098 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-scripts\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.327109 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-66vh9"]
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.328483 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-66vh9"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.333441 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.334094 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7sw2k"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.334723 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.365906 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-66vh9"]
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.389321 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-q5vz8"]
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.392929 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q5vz8"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.399919 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.400072 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9qf8k"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.403910 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-svc\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.403949 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-fernet-keys\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.403977 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-credential-keys\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404015 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-config-data\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404040 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-combined-ca-bundle\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g"
\"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404092 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stdf8\" (UniqueName: \"kubernetes.io/projected/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-kube-api-access-stdf8\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404123 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404157 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404178 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l6qr\" (UniqueName: \"kubernetes.io/projected/012afe31-0331-4b3a-957f-195a20a27bb9-kube-api-access-9l6qr\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404195 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-config\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404215 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.404674 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.405227 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.406058 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.406577 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.407368 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-config\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.409552 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-svc\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.410199 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-scripts\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.421272 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-q5vz8"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.422252 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-fernet-keys\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.422280 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-config-data\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.429892 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-combined-ca-bundle\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.435765 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-credential-keys\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.445214 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l6qr\" (UniqueName: \"kubernetes.io/projected/012afe31-0331-4b3a-957f-195a20a27bb9-kube-api-access-9l6qr\") pod \"dnsmasq-dns-5dc74c5c67-wqfjn\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.461306 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stdf8\" (UniqueName: 
\"kubernetes.io/projected/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-kube-api-access-stdf8\") pod \"keystone-bootstrap-twr6g\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.471553 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.503162 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6spsr"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.504569 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505146 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-combined-ca-bundle\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505191 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-config-data\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505266 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87kfw\" (UniqueName: \"kubernetes.io/projected/30f08826-4d6a-453d-8681-52d2446a5918-kube-api-access-87kfw\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505290 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rqk\" (UniqueName: \"kubernetes.io/projected/76a51e5e-b005-4d01-b0a3-86f27d671c32-kube-api-access-v8rqk\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505328 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-scripts\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505370 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-config\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505427 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-db-sync-config-data\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505467 4889 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-combined-ca-bundle\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.505503 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76a51e5e-b005-4d01-b0a3-86f27d671c32-etc-machine-id\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.517429 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.517614 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.518214 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fcmpn" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.541831 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-578vb"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.543192 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.549467 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.549675 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s5g4w" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.552925 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc74c5c67-wqfjn"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.569910 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-578vb"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.606611 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-db-sync-config-data\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.606687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-combined-ca-bundle\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.606724 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76a51e5e-b005-4d01-b0a3-86f27d671c32-etc-machine-id\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.606746 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-scripts\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.606772 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-combined-ca-bundle\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.606790 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-config-data\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.606824 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-combined-ca-bundle\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.606845 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-config-data\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.609146 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87kfw\" (UniqueName: \"kubernetes.io/projected/30f08826-4d6a-453d-8681-52d2446a5918-kube-api-access-87kfw\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.609249 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76a51e5e-b005-4d01-b0a3-86f27d671c32-etc-machine-id\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.609319 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rqk\" (UniqueName: \"kubernetes.io/projected/76a51e5e-b005-4d01-b0a3-86f27d671c32-kube-api-access-v8rqk\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.609362 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87c4c\" (UniqueName: \"kubernetes.io/projected/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-kube-api-access-87c4c\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.609396 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-scripts\") pod \"cinder-db-sync-q5vz8\" 
(UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.609490 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-logs\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.609546 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-config\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.625405 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-scripts\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.625544 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-combined-ca-bundle\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.629692 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-combined-ca-bundle\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.630888 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-config-data\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.633644 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-db-sync-config-data\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.640281 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6spsr"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.650488 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-config\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.652353 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rqk\" (UniqueName: \"kubernetes.io/projected/76a51e5e-b005-4d01-b0a3-86f27d671c32-kube-api-access-v8rqk\") pod \"cinder-db-sync-q5vz8\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc 
kubenswrapper[4889]: I1128 07:07:50.654958 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87kfw\" (UniqueName: \"kubernetes.io/projected/30f08826-4d6a-453d-8681-52d2446a5918-kube-api-access-87kfw\") pod \"neutron-db-sync-66vh9\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.708968 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.734940 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.743366 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87c4c\" (UniqueName: \"kubernetes.io/projected/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-kube-api-access-87c4c\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.743473 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-logs\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.743521 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zh9n\" (UniqueName: \"kubernetes.io/projected/cb878697-faf4-4e49-9d9c-54f02215856b-kube-api-access-4zh9n\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.743571 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-db-sync-config-data\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.743634 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-combined-ca-bundle\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.743745 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-scripts\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.743815 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-combined-ca-bundle\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.748210 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-logs\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.756307 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-config-data\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.756740 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-scripts\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.756940 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.756949 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.760501 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.773225 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-combined-ca-bundle\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.774246 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87c4c\" (UniqueName: \"kubernetes.io/projected/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-kube-api-access-87c4c\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.788845 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-config-data\") pod \"placement-db-sync-6spsr\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.789183 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67965bf7bf-tj296"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.791611 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.811661 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67965bf7bf-tj296"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.819438 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860387 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860452 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-nb\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860482 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glsgd\" (UniqueName: \"kubernetes.io/projected/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-kube-api-access-glsgd\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860533 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-combined-ca-bundle\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860561 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-run-httpd\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860596 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860621 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-scripts\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860677 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-config-data\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860722 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-config\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860780 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-log-httpd\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860835 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-swift-storage-0\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860856 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7v2h\" (UniqueName: \"kubernetes.io/projected/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-kube-api-access-n7v2h\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860880 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-svc\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860917 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zh9n\" (UniqueName: \"kubernetes.io/projected/cb878697-faf4-4e49-9d9c-54f02215856b-kube-api-access-4zh9n\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860952 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-sb\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.860988 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-db-sync-config-data\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.865631 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-db-sync-config-data\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.870288 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-combined-ca-bundle\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.874111 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.878142 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6spsr" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.883136 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zh9n\" (UniqueName: \"kubernetes.io/projected/cb878697-faf4-4e49-9d9c-54f02215856b-kube-api-access-4zh9n\") pod \"barbican-db-sync-578vb\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.891995 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-578vb" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.951311 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-66vh9" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982315 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-config-data\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982355 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-config\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982428 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-log-httpd\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982475 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-swift-storage-0\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982493 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7v2h\" (UniqueName: \"kubernetes.io/projected/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-kube-api-access-n7v2h\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982512 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-svc\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 
28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982540 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-sb\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982613 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982637 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-nb\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982661 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glsgd\" (UniqueName: \"kubernetes.io/projected/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-kube-api-access-glsgd\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982702 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-run-httpd\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982740 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.982764 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-scripts\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.983463 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-log-httpd\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.983838 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-run-httpd\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.984580 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-nb\") pod \"dnsmasq-dns-67965bf7bf-tj296\" 
(UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.984626 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-swift-storage-0\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.984655 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-svc\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.985386 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-config\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.986816 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-sb\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.987500 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.987609 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-config-data\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.988144 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-scripts\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:50 crc kubenswrapper[4889]: I1128 07:07:50.991522 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.008290 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7v2h\" (UniqueName: \"kubernetes.io/projected/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-kube-api-access-n7v2h\") pod \"ceilometer-0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " pod="openstack/ceilometer-0" Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.011862 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glsgd\" (UniqueName: 
\"kubernetes.io/projected/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-kube-api-access-glsgd\") pod \"dnsmasq-dns-67965bf7bf-tj296\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.079834 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.131989 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.250261 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc74c5c67-wqfjn"] Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.527746 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-twr6g"] Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.534067 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6spsr"] Nov 28 07:07:51 crc kubenswrapper[4889]: W1128 07:07:51.536177 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92d2f4ae_5c83_4c28_8fb3_ae37342d4fbe.slice/crio-14fe2f14d01e52104bbe57590c158c0b759541ecea588d4b2c89ad6cbaa3f0fc WatchSource:0}: Error finding container 14fe2f14d01e52104bbe57590c158c0b759541ecea588d4b2c89ad6cbaa3f0fc: Status 404 returned error can't find the container with id 14fe2f14d01e52104bbe57590c158c0b759541ecea588d4b2c89ad6cbaa3f0fc Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.541202 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-q5vz8"] Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.662538 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-66vh9"] Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.759976 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.772783 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-578vb"] Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.783450 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67965bf7bf-tj296"] Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.850593 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerStarted","Data":"4f9e2df9cb3a549cfc1d0193450f32bccecf57514347f90799c39656ec4bbb8b"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.852496 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-66vh9" event={"ID":"30f08826-4d6a-453d-8681-52d2446a5918","Type":"ContainerStarted","Data":"26336e48f1946ac8dea2e47b191c50eacf4012fefe2be158dd127f4d92fc9732"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.853509 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q5vz8" event={"ID":"76a51e5e-b005-4d01-b0a3-86f27d671c32","Type":"ContainerStarted","Data":"cce8ffe603b9aac8cbf56eb5fa8be362c7a6b6764f5496d532dba1dde720312c"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.855094 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" 
event={"ID":"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438","Type":"ContainerStarted","Data":"001949ea45d7fd55fb6ee45312dd1e6ea62761cf8d016844abf7b041c7613b34"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.856798 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-twr6g" event={"ID":"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe","Type":"ContainerStarted","Data":"14fe2f14d01e52104bbe57590c158c0b759541ecea588d4b2c89ad6cbaa3f0fc"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.859777 4889 generic.go:334] "Generic (PLEG): container finished" podID="012afe31-0331-4b3a-957f-195a20a27bb9" containerID="0fa4c1055c45794f6ece308f73b7130ed4fcd91f6ed3f4c700567726b4141ecf" exitCode=0 Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.859985 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" event={"ID":"012afe31-0331-4b3a-957f-195a20a27bb9","Type":"ContainerDied","Data":"0fa4c1055c45794f6ece308f73b7130ed4fcd91f6ed3f4c700567726b4141ecf"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.860020 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" event={"ID":"012afe31-0331-4b3a-957f-195a20a27bb9","Type":"ContainerStarted","Data":"1e73e005be8b789044753e28363bca1dd91d4343072b6f6b2fea1903b9a072ab"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.862560 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6spsr" event={"ID":"851c4202-ebf1-44df-97d1-4c9b9bfd1fba","Type":"ContainerStarted","Data":"204ee9a156d134e1b1e1c7a495070ac446f9f6e0648fd25a5b56f95a357a400a"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.864082 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-578vb" event={"ID":"cb878697-faf4-4e49-9d9c-54f02215856b","Type":"ContainerStarted","Data":"d76b1485721804a84995fac033c230daf341ff8ac6e3079f5a43d07c1ad40265"} Nov 28 07:07:51 crc kubenswrapper[4889]: I1128 07:07:51.889046 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-twr6g" podStartSLOduration=1.889020991 podStartE2EDuration="1.889020991s" podCreationTimestamp="2025-11-28 07:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:51.88051155 +0000 UTC m=+1194.850745715" watchObservedRunningTime="2025-11-28 07:07:51.889020991 +0000 UTC m=+1194.859255146" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.245909 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.344566 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-svc\") pod \"012afe31-0331-4b3a-957f-195a20a27bb9\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.344679 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-sb\") pod \"012afe31-0331-4b3a-957f-195a20a27bb9\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.344887 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-swift-storage-0\") pod \"012afe31-0331-4b3a-957f-195a20a27bb9\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.344959 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-nb\") pod \"012afe31-0331-4b3a-957f-195a20a27bb9\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.344997 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-config\") pod \"012afe31-0331-4b3a-957f-195a20a27bb9\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.345057 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l6qr\" (UniqueName: \"kubernetes.io/projected/012afe31-0331-4b3a-957f-195a20a27bb9-kube-api-access-9l6qr\") pod \"012afe31-0331-4b3a-957f-195a20a27bb9\" (UID: \"012afe31-0331-4b3a-957f-195a20a27bb9\") " Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.360196 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012afe31-0331-4b3a-957f-195a20a27bb9-kube-api-access-9l6qr" (OuterVolumeSpecName: "kube-api-access-9l6qr") pod "012afe31-0331-4b3a-957f-195a20a27bb9" (UID: "012afe31-0331-4b3a-957f-195a20a27bb9"). InnerVolumeSpecName "kube-api-access-9l6qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.371762 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "012afe31-0331-4b3a-957f-195a20a27bb9" (UID: "012afe31-0331-4b3a-957f-195a20a27bb9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.376130 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "012afe31-0331-4b3a-957f-195a20a27bb9" (UID: "012afe31-0331-4b3a-957f-195a20a27bb9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.376846 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "012afe31-0331-4b3a-957f-195a20a27bb9" (UID: "012afe31-0331-4b3a-957f-195a20a27bb9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.383193 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "012afe31-0331-4b3a-957f-195a20a27bb9" (UID: "012afe31-0331-4b3a-957f-195a20a27bb9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.398549 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-config" (OuterVolumeSpecName: "config") pod "012afe31-0331-4b3a-957f-195a20a27bb9" (UID: "012afe31-0331-4b3a-957f-195a20a27bb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.449005 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l6qr\" (UniqueName: \"kubernetes.io/projected/012afe31-0331-4b3a-957f-195a20a27bb9-kube-api-access-9l6qr\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.449037 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.449047 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.449057 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.449065 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.449073 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012afe31-0331-4b3a-957f-195a20a27bb9-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.889838 4889 generic.go:334] "Generic (PLEG): container finished" podID="8d6cc417-c977-4f6e-8e9c-b420b524d3d5" containerID="23f7858730740d33a3badd825c56dd3177715fcfdc6cdc319dbc0205c639dc27" exitCode=0 Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.889924 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8cb4f" event={"ID":"8d6cc417-c977-4f6e-8e9c-b420b524d3d5","Type":"ContainerDied","Data":"23f7858730740d33a3badd825c56dd3177715fcfdc6cdc319dbc0205c639dc27"} Nov 28 07:07:52 crc 
Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.899521 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-66vh9" event={"ID":"30f08826-4d6a-453d-8681-52d2446a5918","Type":"ContainerStarted","Data":"1e9eae91f17d3ffa4da9b7b6996803051af34401caec34a002b4fcace79e9594"}
Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.904507 4889 generic.go:334] "Generic (PLEG): container finished" podID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerID="8393e2ae1bbb845f0907bcdf77ac1e4953e5eb983d00ef75c7cb2ba5a9a0517f" exitCode=0
Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.904671 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" event={"ID":"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438","Type":"ContainerDied","Data":"8393e2ae1bbb845f0907bcdf77ac1e4953e5eb983d00ef75c7cb2ba5a9a0517f"}
Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.919173 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-twr6g" event={"ID":"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe","Type":"ContainerStarted","Data":"55fe50cb71e61d63b4658594aa014b55e2caac30d4a3cfbeb4b318e6d0f5877b"}
Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.960579 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn" event={"ID":"012afe31-0331-4b3a-957f-195a20a27bb9","Type":"ContainerDied","Data":"1e73e005be8b789044753e28363bca1dd91d4343072b6f6b2fea1903b9a072ab"}
Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.960633 4889 scope.go:117] "RemoveContainer" containerID="0fa4c1055c45794f6ece308f73b7130ed4fcd91f6ed3f4c700567726b4141ecf"
Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.960823 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc74c5c67-wqfjn"
Nov 28 07:07:52 crc kubenswrapper[4889]: I1128 07:07:52.970500 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-66vh9" podStartSLOduration=2.970481431 podStartE2EDuration="2.970481431s" podCreationTimestamp="2025-11-28 07:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:52.965930329 +0000 UTC m=+1195.936164484" watchObservedRunningTime="2025-11-28 07:07:52.970481431 +0000 UTC m=+1195.940715586"
Nov 28 07:07:53 crc kubenswrapper[4889]: I1128 07:07:53.164831 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc74c5c67-wqfjn"]
Nov 28 07:07:53 crc kubenswrapper[4889]: I1128 07:07:53.179946 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc74c5c67-wqfjn"]
Nov 28 07:07:53 crc kubenswrapper[4889]: I1128 07:07:53.356103 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012afe31-0331-4b3a-957f-195a20a27bb9" path="/var/lib/kubelet/pods/012afe31-0331-4b3a-957f-195a20a27bb9/volumes"
Nov 28 07:07:53 crc kubenswrapper[4889]: I1128 07:07:53.412497 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:07:53 crc kubenswrapper[4889]: I1128 07:07:53.983490 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" event={"ID":"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438","Type":"ContainerStarted","Data":"773fe4a57f660c24c1868fc4c633835e12a8003b5a9a336c84f9223797c8f036"}
Nov 28 07:07:53 crc kubenswrapper[4889]: I1128 07:07:53.983955 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67965bf7bf-tj296"
(probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.021393 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" podStartSLOduration=4.021370714 podStartE2EDuration="4.021370714s" podCreationTimestamp="2025-11-28 07:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:54.015481702 +0000 UTC m=+1196.985715877" watchObservedRunningTime="2025-11-28 07:07:54.021370714 +0000 UTC m=+1196.991604869" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.636958 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.794843 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-combined-ca-bundle\") pod \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.794999 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcwd9\" (UniqueName: \"kubernetes.io/projected/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-kube-api-access-rcwd9\") pod \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.795027 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-db-sync-config-data\") pod \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.795148 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-config-data\") pod \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\" (UID: \"8d6cc417-c977-4f6e-8e9c-b420b524d3d5\") " Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.835010 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8d6cc417-c977-4f6e-8e9c-b420b524d3d5" (UID: "8d6cc417-c977-4f6e-8e9c-b420b524d3d5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.838485 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-kube-api-access-rcwd9" (OuterVolumeSpecName: "kube-api-access-rcwd9") pod "8d6cc417-c977-4f6e-8e9c-b420b524d3d5" (UID: "8d6cc417-c977-4f6e-8e9c-b420b524d3d5"). InnerVolumeSpecName "kube-api-access-rcwd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.840577 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d6cc417-c977-4f6e-8e9c-b420b524d3d5" (UID: "8d6cc417-c977-4f6e-8e9c-b420b524d3d5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.859365 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-config-data" (OuterVolumeSpecName: "config-data") pod "8d6cc417-c977-4f6e-8e9c-b420b524d3d5" (UID: "8d6cc417-c977-4f6e-8e9c-b420b524d3d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.897287 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.897325 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.897337 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcwd9\" (UniqueName: \"kubernetes.io/projected/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-kube-api-access-rcwd9\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.897348 4889 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d6cc417-c977-4f6e-8e9c-b420b524d3d5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.995928 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8cb4f" event={"ID":"8d6cc417-c977-4f6e-8e9c-b420b524d3d5","Type":"ContainerDied","Data":"f815884490266dc60f7e8356b5fdc23bb7456a3fe28428700de859ec6cca6907"} Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.995983 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f815884490266dc60f7e8356b5fdc23bb7456a3fe28428700de859ec6cca6907" Nov 28 07:07:54 crc kubenswrapper[4889]: I1128 07:07:54.995958 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8cb4f" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.244983 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67965bf7bf-tj296"] Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.292623 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f645789c-7wjjp"] Nov 28 07:07:55 crc kubenswrapper[4889]: E1128 07:07:55.293177 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012afe31-0331-4b3a-957f-195a20a27bb9" containerName="init" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.293211 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="012afe31-0331-4b3a-957f-195a20a27bb9" containerName="init" Nov 28 07:07:55 crc kubenswrapper[4889]: E1128 07:07:55.293244 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6cc417-c977-4f6e-8e9c-b420b524d3d5" containerName="glance-db-sync" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.293251 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6cc417-c977-4f6e-8e9c-b420b524d3d5" containerName="glance-db-sync" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.293529 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6cc417-c977-4f6e-8e9c-b420b524d3d5" containerName="glance-db-sync" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.293564 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="012afe31-0331-4b3a-957f-195a20a27bb9" containerName="init" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.299111 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.313584 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f645789c-7wjjp"] Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.408167 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhw7r\" (UniqueName: \"kubernetes.io/projected/d7499407-c822-4002-8f62-d423b29d39ab-kube-api-access-hhw7r\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.408253 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-nb\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.408369 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-sb\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp" Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.408593 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-swift-storage-0\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp" 
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.408720 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-svc\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.408754 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-config\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.510912 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-nb\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.511299 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-sb\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.511446 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-swift-storage-0\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.511538 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-svc\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.511613 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-config\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.512071 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhw7r\" (UniqueName: \"kubernetes.io/projected/d7499407-c822-4002-8f62-d423b29d39ab-kube-api-access-hhw7r\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.512982 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-swift-storage-0\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.513138 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-config\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.513145 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-sb\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.513389 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-nb\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.513541 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-svc\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.532156 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhw7r\" (UniqueName: \"kubernetes.io/projected/d7499407-c822-4002-8f62-d423b29d39ab-kube-api-access-hhw7r\") pod \"dnsmasq-dns-55f645789c-7wjjp\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:55 crc kubenswrapper[4889]: I1128 07:07:55.642412 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f645789c-7wjjp"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.005673 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" podUID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerName="dnsmasq-dns" containerID="cri-o://773fe4a57f660c24c1868fc4c633835e12a8003b5a9a336c84f9223797c8f036" gracePeriod=10
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.266280 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.268246 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
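The sequence above is the full mount pipeline for dnsmasq-dns-55f645789c-7wjjp: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded for each of the six volumes, after which sandbox creation proceeds. A minimal sketch (illustrative, not kubelet code) that pairs the started/succeeded records per pod and flags volumes that never finished mounting, assuming the journal on stdin:

    import re
    import sys
    from collections import defaultdict

    started = re.compile(r'MountVolume started for volume \\"(?P<vol>[^\\"]+)\\".* pod="(?P<pod>[^"]+)"')
    setup_ok = re.compile(r'MountVolume\.SetUp succeeded for volume \\"(?P<vol>[^\\"]+)\\".* pod="(?P<pod>[^"]+)"')

    waiting = defaultdict(set)
    for line in sys.stdin:
        if (m := started.search(line)):
            waiting[m["pod"]].add(m["vol"])
        elif (m := setup_ok.search(line)):
            waiting[m["pod"]].discard(m["vol"])

    # Anything left in the set was started but never confirmed mounted.
    for pod, vols in waiting.items():
        if vols:
            print(pod, "never finished mounting:", sorted(vols))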
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.271308 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.271660 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.271856 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sp86l" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.280536 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.378122 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.379681 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.384164 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.386371 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.430733 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.430821 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.430939 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.431012 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.431054 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.431095 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-logs\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.431150 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2qk7\" (UniqueName: \"kubernetes.io/projected/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-kube-api-access-w2qk7\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532160 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532214 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532232 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-logs\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532252 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-logs\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532272 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2qk7\" (UniqueName: \"kubernetes.io/projected/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-kube-api-access-w2qk7\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532311 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532341 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxggs\" (UniqueName: \"kubernetes.io/projected/64a68c3f-d267-44b8-b32c-c7f1579df495-kube-api-access-cxggs\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532384 4889 
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532403 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532434 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-config-data\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532459 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-scripts\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532497 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532518 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.532559 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.533269 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-logs\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.533447 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.533805 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0"
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.541408 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.544889 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.558679 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.559221 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2qk7\" (UniqueName: \"kubernetes.io/projected/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-kube-api-access-w2qk7\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.568792 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.589820 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.634256 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxggs\" (UniqueName: \"kubernetes.io/projected/64a68c3f-d267-44b8-b32c-c7f1579df495-kube-api-access-cxggs\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.634327 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.634360 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-config-data\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.634377 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-scripts\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.634408 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.634478 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.634499 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-logs\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.634924 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-logs\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.637016 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 28 07:07:56 crc 
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.637211 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.639016 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-scripts\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.639344 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-config-data\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.640495 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.659297 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxggs\" (UniqueName: \"kubernetes.io/projected/64a68c3f-d267-44b8-b32c-c7f1579df495-kube-api-access-cxggs\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.662829 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:07:56 crc kubenswrapper[4889]: I1128 07:07:56.741437 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
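Both glance pods bind local persistent volumes, and the MountVolume.MountDevice records above expose the node-local backing directory for each PV: local-storage01-crc maps to /mnt/openstack/pv01 for the external API pod, local-storage07-crc to /mnt/openstack/pv07 for the internal one. A small sketch (hypothetical, assuming the journal on stdin) that extracts this PV-to-path mapping:

    import re
    import sys

    # Matches 'MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" ... device mount path \"/mnt/openstack/pv01\"'
    dev = re.compile(r'MountVolume\.MountDevice succeeded for volume \\"(?P<pv>[^\\"]+)\\"'
                     r'.*device mount path \\"(?P<path>[^\\"]+)\\"')
    for line in sys.stdin:
        if (m := dev.search(line)):
            print(f'{m["pv"]} -> {m["path"]}')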
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:07:57 crc kubenswrapper[4889]: I1128 07:07:57.016745 4889 generic.go:334] "Generic (PLEG): container finished" podID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerID="773fe4a57f660c24c1868fc4c633835e12a8003b5a9a336c84f9223797c8f036" exitCode=0 Nov 28 07:07:57 crc kubenswrapper[4889]: I1128 07:07:57.016809 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" event={"ID":"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438","Type":"ContainerDied","Data":"773fe4a57f660c24c1868fc4c633835e12a8003b5a9a336c84f9223797c8f036"} Nov 28 07:07:58 crc kubenswrapper[4889]: I1128 07:07:58.030691 4889 generic.go:334] "Generic (PLEG): container finished" podID="92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" containerID="55fe50cb71e61d63b4658594aa014b55e2caac30d4a3cfbeb4b318e6d0f5877b" exitCode=0 Nov 28 07:07:58 crc kubenswrapper[4889]: I1128 07:07:58.030796 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-twr6g" event={"ID":"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe","Type":"ContainerDied","Data":"55fe50cb71e61d63b4658594aa014b55e2caac30d4a3cfbeb4b318e6d0f5877b"} Nov 28 07:07:58 crc kubenswrapper[4889]: I1128 07:07:58.782409 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:07:58 crc kubenswrapper[4889]: I1128 07:07:58.782478 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:08:00 crc kubenswrapper[4889]: I1128 07:08:00.738689 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:08:00 crc kubenswrapper[4889]: I1128 07:08:00.801885 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:08:02 crc kubenswrapper[4889]: I1128 07:08:02.915661 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:08:02 crc kubenswrapper[4889]: I1128 07:08:02.923061 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.074866 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-config\") pod \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075211 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-credential-keys\") pod \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075263 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-config-data\") pod \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075288 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-fernet-keys\") pod \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075344 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-combined-ca-bundle\") pod \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075387 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-nb\") pod \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075470 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-sb\") pod \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075501 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-swift-storage-0\") pod \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075526 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glsgd\" (UniqueName: \"kubernetes.io/projected/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-kube-api-access-glsgd\") pod \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\" (UID: \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075547 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-svc\") pod \"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\" (UID: 
\"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075582 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stdf8\" (UniqueName: \"kubernetes.io/projected/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-kube-api-access-stdf8\") pod \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.075599 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-scripts\") pod \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\" (UID: \"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe\") " Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.082945 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" (UID: "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.084231 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" (UID: "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.085753 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-kube-api-access-stdf8" (OuterVolumeSpecName: "kube-api-access-stdf8") pod "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" (UID: "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe"). InnerVolumeSpecName "kube-api-access-stdf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.086608 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" event={"ID":"dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438","Type":"ContainerDied","Data":"001949ea45d7fd55fb6ee45312dd1e6ea62761cf8d016844abf7b041c7613b34"} Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.086645 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.086670 4889 scope.go:117] "RemoveContainer" containerID="773fe4a57f660c24c1868fc4c633835e12a8003b5a9a336c84f9223797c8f036" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.089235 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-twr6g" event={"ID":"92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe","Type":"ContainerDied","Data":"14fe2f14d01e52104bbe57590c158c0b759541ecea588d4b2c89ad6cbaa3f0fc"} Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.089270 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14fe2f14d01e52104bbe57590c158c0b759541ecea588d4b2c89ad6cbaa3f0fc" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.089284 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-twr6g" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.093006 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-scripts" (OuterVolumeSpecName: "scripts") pod "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" (UID: "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.095123 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-kube-api-access-glsgd" (OuterVolumeSpecName: "kube-api-access-glsgd") pod "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" (UID: "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438"). InnerVolumeSpecName "kube-api-access-glsgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.104614 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-config-data" (OuterVolumeSpecName: "config-data") pod "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" (UID: "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.131680 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" (UID: "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.131868 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" (UID: "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.132591 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" (UID: "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.135311 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" (UID: "92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.160045 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-config" (OuterVolumeSpecName: "config") pod "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" (UID: "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.172451 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" (UID: "dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.184849 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.184929 4889 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.184941 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.184959 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.184971 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.184984 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.184996 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glsgd\" (UniqueName: \"kubernetes.io/projected/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-kube-api-access-glsgd\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.185008 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.185018 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stdf8\" (UniqueName: \"kubernetes.io/projected/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-kube-api-access-stdf8\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.185031 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.185042 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.185051 4889 reconciler_common.go:293] "Volume detached for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.327181 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.410148 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67965bf7bf-tj296"] Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.417698 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67965bf7bf-tj296"] Nov 28 07:08:03 crc kubenswrapper[4889]: I1128 07:08:03.996045 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-twr6g"] Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.003943 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-twr6g"] Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.140610 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vvjhk"] Nov 28 07:08:04 crc kubenswrapper[4889]: E1128 07:08:04.141065 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerName="init" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.141088 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerName="init" Nov 28 07:08:04 crc kubenswrapper[4889]: E1128 07:08:04.141111 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerName="dnsmasq-dns" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.141120 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerName="dnsmasq-dns" Nov 28 07:08:04 crc kubenswrapper[4889]: E1128 07:08:04.141140 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" containerName="keystone-bootstrap" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.141148 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" containerName="keystone-bootstrap" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.141377 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" containerName="keystone-bootstrap" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.141421 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerName="dnsmasq-dns" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.142052 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.146746 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.147317 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j7bgn" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.147468 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.147585 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.148420 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.153209 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vvjhk"] Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.206159 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-combined-ca-bundle\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.206231 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-credential-keys\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.206256 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-fernet-keys\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.206391 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-config-data\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.206426 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-scripts\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.206489 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zpxn\" (UniqueName: \"kubernetes.io/projected/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-kube-api-access-4zpxn\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.307593 4889 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-credential-keys\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.307639 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-fernet-keys\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.307675 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-config-data\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.307693 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-scripts\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.307741 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zpxn\" (UniqueName: \"kubernetes.io/projected/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-kube-api-access-4zpxn\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.307823 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-combined-ca-bundle\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.311823 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-credential-keys\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.312689 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-config-data\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.323360 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-scripts\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.323959 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-combined-ca-bundle\") pod \"keystone-bootstrap-vvjhk\" (UID: 
\"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.327227 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zpxn\" (UniqueName: \"kubernetes.io/projected/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-kube-api-access-4zpxn\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.328262 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-fernet-keys\") pod \"keystone-bootstrap-vvjhk\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:04 crc kubenswrapper[4889]: I1128 07:08:04.469016 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:05 crc kubenswrapper[4889]: I1128 07:08:05.359268 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe" path="/var/lib/kubelet/pods/92d2f4ae-5c83-4c28-8fb3-ae37342d4fbe/volumes" Nov 28 07:08:05 crc kubenswrapper[4889]: I1128 07:08:05.360204 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" path="/var/lib/kubelet/pods/dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438/volumes" Nov 28 07:08:06 crc kubenswrapper[4889]: I1128 07:08:06.133520 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67965bf7bf-tj296" podUID="dd2fe5f8-9c2e-4726-bee4-21f4d7dd9438" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Nov 28 07:08:10 crc kubenswrapper[4889]: W1128 07:08:10.685340 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4abefff5_85b7_4144_aa2c_9fd9cfd8b5d5.slice/crio-1858310747f781cddbd4482603edd5ff7de5d2e3cda971cb9874a8331f7da2e5 WatchSource:0}: Error finding container 1858310747f781cddbd4482603edd5ff7de5d2e3cda971cb9874a8331f7da2e5: Status 404 returned error can't find the container with id 1858310747f781cddbd4482603edd5ff7de5d2e3cda971cb9874a8331f7da2e5 Nov 28 07:08:10 crc kubenswrapper[4889]: E1128 07:08:10.686436 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:3a56b50437a0c9a9a7b30c10f5e43bbdb7d9a94b723c70d36f0b01ff545e00eb" Nov 28 07:08:10 crc kubenswrapper[4889]: E1128 07:08:10.686675 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:3a56b50437a0c9a9a7b30c10f5e43bbdb7d9a94b723c70d36f0b01ff545e00eb,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zh9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-578vb_openstack(cb878697-faf4-4e49-9d9c-54f02215856b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:08:10 crc kubenswrapper[4889]: E1128 07:08:10.687892 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-578vb" podUID="cb878697-faf4-4e49-9d9c-54f02215856b" Nov 28 07:08:10 crc kubenswrapper[4889]: I1128 07:08:10.738962 4889 scope.go:117] "RemoveContainer" containerID="8393e2ae1bbb845f0907bcdf77ac1e4953e5eb983d00ef75c7cb2ba5a9a0517f" Nov 28 07:08:11 crc kubenswrapper[4889]: I1128 07:08:11.134512 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f645789c-7wjjp"] Nov 28 07:08:11 crc kubenswrapper[4889]: I1128 07:08:11.153101 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5","Type":"ContainerStarted","Data":"1858310747f781cddbd4482603edd5ff7de5d2e3cda971cb9874a8331f7da2e5"} Nov 28 07:08:11 crc kubenswrapper[4889]: E1128 07:08:11.154104 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:3a56b50437a0c9a9a7b30c10f5e43bbdb7d9a94b723c70d36f0b01ff545e00eb\\\"\"" pod="openstack/barbican-db-sync-578vb" podUID="cb878697-faf4-4e49-9d9c-54f02215856b" Nov 28 07:08:12 crc kubenswrapper[4889]: W1128 07:08:12.556621 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7499407_c822_4002_8f62_d423b29d39ab.slice/crio-5d02fb46e0d25524880b239d68bb3740c462ac1c05e583a090cb325035fe49f5 WatchSource:0}: Error finding container 5d02fb46e0d25524880b239d68bb3740c462ac1c05e583a090cb325035fe49f5: Status 
404 returned error can't find the container with id 5d02fb46e0d25524880b239d68bb3740c462ac1c05e583a090cb325035fe49f5 Nov 28 07:08:12 crc kubenswrapper[4889]: E1128 07:08:12.608882 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b5266c9a26766fce2b92f95dff52d362a760f7baf1474cdcb33bd68570e096c0" Nov 28 07:08:12 crc kubenswrapper[4889]: E1128 07:08:12.609331 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b5266c9a26766fce2b92f95dff52d362a760f7baf1474cdcb33bd68570e096c0,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8rqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-q5vz8_openstack(76a51e5e-b005-4d01-b0a3-86f27d671c32): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:08:12 crc kubenswrapper[4889]: E1128 07:08:12.610797 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-q5vz8" podUID="76a51e5e-b005-4d01-b0a3-86f27d671c32" Nov 28 
07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.061560 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.086239 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vvjhk"] Nov 28 07:08:13 crc kubenswrapper[4889]: W1128 07:08:13.101075 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc1cb2bd_21c2_4c08_9ad4_3eb7a20ffc85.slice/crio-b71d00e6b0ae62ab31a73ea135491a7ff32208859fc2693d8177e8aa658f0686 WatchSource:0}: Error finding container b71d00e6b0ae62ab31a73ea135491a7ff32208859fc2693d8177e8aa658f0686: Status 404 returned error can't find the container with id b71d00e6b0ae62ab31a73ea135491a7ff32208859fc2693d8177e8aa658f0686 Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.173100 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"64a68c3f-d267-44b8-b32c-c7f1579df495","Type":"ContainerStarted","Data":"ef4968ca1324d4cdd224af66c206d7954d71e6a6f9e8003ac9dbe75b3316b1c1"} Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.174522 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6spsr" event={"ID":"851c4202-ebf1-44df-97d1-4c9b9bfd1fba","Type":"ContainerStarted","Data":"0012a61ddc1ba62cde6a8248b5fdd03f750088e3a8e50ea4c5912c04f1f3e624"} Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.178691 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vvjhk" event={"ID":"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85","Type":"ContainerStarted","Data":"b71d00e6b0ae62ab31a73ea135491a7ff32208859fc2693d8177e8aa658f0686"} Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.181278 4889 generic.go:334] "Generic (PLEG): container finished" podID="d7499407-c822-4002-8f62-d423b29d39ab" containerID="65bfe0fccfe2242969d58b76536a5c635e834e06f989b6ce604dddf0da785469" exitCode=0 Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.181357 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" event={"ID":"d7499407-c822-4002-8f62-d423b29d39ab","Type":"ContainerDied","Data":"65bfe0fccfe2242969d58b76536a5c635e834e06f989b6ce604dddf0da785469"} Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.181388 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" event={"ID":"d7499407-c822-4002-8f62-d423b29d39ab","Type":"ContainerStarted","Data":"5d02fb46e0d25524880b239d68bb3740c462ac1c05e583a090cb325035fe49f5"} Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.186850 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerStarted","Data":"0984b6f70d98fe9863232af5cccbd01580ad25285a24a1a8065176170211d4d0"} Nov 28 07:08:13 crc kubenswrapper[4889]: E1128 07:08:13.188825 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b5266c9a26766fce2b92f95dff52d362a760f7baf1474cdcb33bd68570e096c0\\\"\"" pod="openstack/cinder-db-sync-q5vz8" podUID="76a51e5e-b005-4d01-b0a3-86f27d671c32" Nov 28 07:08:13 crc kubenswrapper[4889]: I1128 07:08:13.200064 4889 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/placement-db-sync-6spsr" podStartSLOduration=3.973879056 podStartE2EDuration="23.200043791s" podCreationTimestamp="2025-11-28 07:07:50 +0000 UTC" firstStartedPulling="2025-11-28 07:07:51.542395356 +0000 UTC m=+1194.512629511" lastFinishedPulling="2025-11-28 07:08:10.768560091 +0000 UTC m=+1213.738794246" observedRunningTime="2025-11-28 07:08:13.193056254 +0000 UTC m=+1216.163290409" watchObservedRunningTime="2025-11-28 07:08:13.200043791 +0000 UTC m=+1216.170277946" Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.202568 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" event={"ID":"d7499407-c822-4002-8f62-d423b29d39ab","Type":"ContainerStarted","Data":"c6f45807a67961737d3da251c09401a7668cf8963ef7251d5407b3781910d3ef"} Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.203112 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.205893 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5","Type":"ContainerStarted","Data":"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3"} Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.205934 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5","Type":"ContainerStarted","Data":"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8"} Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.206002 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerName="glance-log" containerID="cri-o://3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8" gracePeriod=30 Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.206120 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerName="glance-httpd" containerID="cri-o://9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3" gracePeriod=30 Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.211075 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"64a68c3f-d267-44b8-b32c-c7f1579df495","Type":"ContainerStarted","Data":"2bffcc7be4324f61f9b0f367da15245bfa16e5ca9d34469a56244277b4d53278"} Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.219887 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vvjhk" event={"ID":"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85","Type":"ContainerStarted","Data":"41c5f3c12a42d9eb237eca5b78a8ea4b30fa7324f282831cbd489a0028d90df2"} Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.254514 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.254496354 podStartE2EDuration="19.254496354s" podCreationTimestamp="2025-11-28 07:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:14.248930518 +0000 UTC m=+1217.219164673" watchObservedRunningTime="2025-11-28 07:08:14.254496354 +0000 UTC m=+1217.224730509" Nov 28 
07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.259654 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" podStartSLOduration=19.259637829 podStartE2EDuration="19.259637829s" podCreationTimestamp="2025-11-28 07:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:14.232062589 +0000 UTC m=+1217.202296744" watchObservedRunningTime="2025-11-28 07:08:14.259637829 +0000 UTC m=+1217.229871974" Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.272966 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vvjhk" podStartSLOduration=10.272947469 podStartE2EDuration="10.272947469s" podCreationTimestamp="2025-11-28 07:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:14.264938819 +0000 UTC m=+1217.235172984" watchObservedRunningTime="2025-11-28 07:08:14.272947469 +0000 UTC m=+1217.243181624" Nov 28 07:08:14 crc kubenswrapper[4889]: I1128 07:08:14.965425 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.109910 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-scripts\") pod \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.109975 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-httpd-run\") pod \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.110051 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-logs\") pod \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.110085 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2qk7\" (UniqueName: \"kubernetes.io/projected/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-kube-api-access-w2qk7\") pod \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.110102 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-config-data\") pod \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.110197 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-combined-ca-bundle\") pod \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.110226 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\" (UID: \"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5\") " Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.111578 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-logs" (OuterVolumeSpecName: "logs") pod "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" (UID: "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.111822 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" (UID: "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.116126 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-scripts" (OuterVolumeSpecName: "scripts") pod "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" (UID: "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.116374 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" (UID: "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.116584 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-kube-api-access-w2qk7" (OuterVolumeSpecName: "kube-api-access-w2qk7") pod "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" (UID: "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5"). InnerVolumeSpecName "kube-api-access-w2qk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.135352 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" (UID: "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.156179 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-config-data" (OuterVolumeSpecName: "config-data") pod "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" (UID: "4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.212571 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.212622 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.212634 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.212643 4889 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.212651 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.212659 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2qk7\" (UniqueName: \"kubernetes.io/projected/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-kube-api-access-w2qk7\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.212674 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.231814 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerStarted","Data":"9497167b757f1ed2baafd4074c4a1ff36ec970c47ae28055165b5a066f9555c0"} Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.232861 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.234206 4889 generic.go:334] "Generic (PLEG): container finished" podID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerID="9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3" exitCode=0 Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.234231 4889 generic.go:334] "Generic (PLEG): container finished" podID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerID="3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8" exitCode=143 Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.234284 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5","Type":"ContainerDied","Data":"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3"} Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.234315 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5","Type":"ContainerDied","Data":"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8"} Nov 28 
07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.234328 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5","Type":"ContainerDied","Data":"1858310747f781cddbd4482603edd5ff7de5d2e3cda971cb9874a8331f7da2e5"} Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.234346 4889 scope.go:117] "RemoveContainer" containerID="9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.234475 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.240093 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"64a68c3f-d267-44b8-b32c-c7f1579df495","Type":"ContainerStarted","Data":"70e1c406d0fcf5e551ec47f18b1c082d733686196c970db748909cbca78472cc"} Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.240286 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerName="glance-log" containerID="cri-o://2bffcc7be4324f61f9b0f367da15245bfa16e5ca9d34469a56244277b4d53278" gracePeriod=30 Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.240331 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerName="glance-httpd" containerID="cri-o://70e1c406d0fcf5e551ec47f18b1c082d733686196c970db748909cbca78472cc" gracePeriod=30 Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.243544 4889 generic.go:334] "Generic (PLEG): container finished" podID="851c4202-ebf1-44df-97d1-4c9b9bfd1fba" containerID="0012a61ddc1ba62cde6a8248b5fdd03f750088e3a8e50ea4c5912c04f1f3e624" exitCode=0 Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.243804 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6spsr" event={"ID":"851c4202-ebf1-44df-97d1-4c9b9bfd1fba","Type":"ContainerDied","Data":"0012a61ddc1ba62cde6a8248b5fdd03f750088e3a8e50ea4c5912c04f1f3e624"} Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.273044 4889 scope.go:117] "RemoveContainer" containerID="3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.287836 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=20.287815212 podStartE2EDuration="20.287815212s" podCreationTimestamp="2025-11-28 07:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:15.263594117 +0000 UTC m=+1218.233828272" watchObservedRunningTime="2025-11-28 07:08:15.287815212 +0000 UTC m=+1218.258049367" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.295167 4889 scope.go:117] "RemoveContainer" containerID="9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3" Nov 28 07:08:15 crc kubenswrapper[4889]: E1128 07:08:15.295528 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3\": container with ID starting with 
9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3 not found: ID does not exist" containerID="9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.295568 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3"} err="failed to get container status \"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3\": rpc error: code = NotFound desc = could not find container \"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3\": container with ID starting with 9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3 not found: ID does not exist" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.295603 4889 scope.go:117] "RemoveContainer" containerID="3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8" Nov 28 07:08:15 crc kubenswrapper[4889]: E1128 07:08:15.295946 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8\": container with ID starting with 3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8 not found: ID does not exist" containerID="3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.295989 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8"} err="failed to get container status \"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8\": rpc error: code = NotFound desc = could not find container \"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8\": container with ID starting with 3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8 not found: ID does not exist" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.296017 4889 scope.go:117] "RemoveContainer" containerID="9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.296304 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3"} err="failed to get container status \"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3\": rpc error: code = NotFound desc = could not find container \"9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3\": container with ID starting with 9db59a0598c79e6aad443575db517bd6d00e9b8437de30f5c172cb9c3e57b8b3 not found: ID does not exist" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.296329 4889 scope.go:117] "RemoveContainer" containerID="3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.296617 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8"} err="failed to get container status \"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8\": rpc error: code = NotFound desc = could not find container \"3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8\": container with ID starting with 3ef683ffaa333d77503f88d1b311c83e92670c9e0689767c895c0316b04cd1c8 not found: ID does not exist" Nov 28 
07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.306680 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.314152 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.329666 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.373846 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" path="/var/lib/kubelet/pods/4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5/volumes" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.374612 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:08:15 crc kubenswrapper[4889]: E1128 07:08:15.375210 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerName="glance-httpd" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.375227 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerName="glance-httpd" Nov 28 07:08:15 crc kubenswrapper[4889]: E1128 07:08:15.375251 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerName="glance-log" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.375258 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerName="glance-log" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.375571 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerName="glance-httpd" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.375599 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abefff5-85b7-4144-aa2c-9fd9cfd8b5d5" containerName="glance-log" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.379248 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.379369 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.381757 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.383583 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.518298 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt672\" (UniqueName: \"kubernetes.io/projected/ab27c833-5fb3-45dd-8bea-5abf637db41a-kube-api-access-dt672\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.518344 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.518373 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.518413 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-logs\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.518481 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.518504 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.518571 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.518598 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-public-tls-certs\") pod \"glance-default-external-api-0\" 
(UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.619792 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.619855 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-logs\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.620790 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-logs\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.620884 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.620919 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.620960 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.620976 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.621068 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt672\" (UniqueName: \"kubernetes.io/projected/ab27c833-5fb3-45dd-8bea-5abf637db41a-kube-api-access-dt672\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.621086 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc 
kubenswrapper[4889]: I1128 07:08:15.621217 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.622474 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.625978 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.626121 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.626990 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.628845 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.640179 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt672\" (UniqueName: \"kubernetes.io/projected/ab27c833-5fb3-45dd-8bea-5abf637db41a-kube-api-access-dt672\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.645196 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " pod="openstack/glance-default-external-api-0" Nov 28 07:08:15 crc kubenswrapper[4889]: I1128 07:08:15.706392 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.251359 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:08:16 crc kubenswrapper[4889]: W1128 07:08:16.258947 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab27c833_5fb3_45dd_8bea_5abf637db41a.slice/crio-6d2887eb3793ba8f584c237dc914a74c049157a3f4c3e152aaee30643ce2827c WatchSource:0}: Error finding container 6d2887eb3793ba8f584c237dc914a74c049157a3f4c3e152aaee30643ce2827c: Status 404 returned error can't find the container with id 6d2887eb3793ba8f584c237dc914a74c049157a3f4c3e152aaee30643ce2827c Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.263027 4889 generic.go:334] "Generic (PLEG): container finished" podID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerID="70e1c406d0fcf5e551ec47f18b1c082d733686196c970db748909cbca78472cc" exitCode=0 Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.263055 4889 generic.go:334] "Generic (PLEG): container finished" podID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerID="2bffcc7be4324f61f9b0f367da15245bfa16e5ca9d34469a56244277b4d53278" exitCode=143 Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.263109 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"64a68c3f-d267-44b8-b32c-c7f1579df495","Type":"ContainerDied","Data":"70e1c406d0fcf5e551ec47f18b1c082d733686196c970db748909cbca78472cc"} Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.263135 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"64a68c3f-d267-44b8-b32c-c7f1579df495","Type":"ContainerDied","Data":"2bffcc7be4324f61f9b0f367da15245bfa16e5ca9d34469a56244277b4d53278"} Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.383409 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.543540 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-config-data\") pod \"64a68c3f-d267-44b8-b32c-c7f1579df495\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.543691 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-scripts\") pod \"64a68c3f-d267-44b8-b32c-c7f1579df495\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.543940 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxggs\" (UniqueName: \"kubernetes.io/projected/64a68c3f-d267-44b8-b32c-c7f1579df495-kube-api-access-cxggs\") pod \"64a68c3f-d267-44b8-b32c-c7f1579df495\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.543977 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-combined-ca-bundle\") pod \"64a68c3f-d267-44b8-b32c-c7f1579df495\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.543993 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"64a68c3f-d267-44b8-b32c-c7f1579df495\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.544033 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-logs\") pod \"64a68c3f-d267-44b8-b32c-c7f1579df495\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.544091 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-httpd-run\") pod \"64a68c3f-d267-44b8-b32c-c7f1579df495\" (UID: \"64a68c3f-d267-44b8-b32c-c7f1579df495\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.545181 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "64a68c3f-d267-44b8-b32c-c7f1579df495" (UID: "64a68c3f-d267-44b8-b32c-c7f1579df495"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.548062 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-logs" (OuterVolumeSpecName: "logs") pod "64a68c3f-d267-44b8-b32c-c7f1579df495" (UID: "64a68c3f-d267-44b8-b32c-c7f1579df495"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.552863 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-scripts" (OuterVolumeSpecName: "scripts") pod "64a68c3f-d267-44b8-b32c-c7f1579df495" (UID: "64a68c3f-d267-44b8-b32c-c7f1579df495"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.552877 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "64a68c3f-d267-44b8-b32c-c7f1579df495" (UID: "64a68c3f-d267-44b8-b32c-c7f1579df495"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.563390 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a68c3f-d267-44b8-b32c-c7f1579df495-kube-api-access-cxggs" (OuterVolumeSpecName: "kube-api-access-cxggs") pod "64a68c3f-d267-44b8-b32c-c7f1579df495" (UID: "64a68c3f-d267-44b8-b32c-c7f1579df495"). InnerVolumeSpecName "kube-api-access-cxggs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.583165 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6spsr" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.586287 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64a68c3f-d267-44b8-b32c-c7f1579df495" (UID: "64a68c3f-d267-44b8-b32c-c7f1579df495"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.615933 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-config-data" (OuterVolumeSpecName: "config-data") pod "64a68c3f-d267-44b8-b32c-c7f1579df495" (UID: "64a68c3f-d267-44b8-b32c-c7f1579df495"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.645781 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.645813 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxggs\" (UniqueName: \"kubernetes.io/projected/64a68c3f-d267-44b8-b32c-c7f1579df495-kube-api-access-cxggs\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.645823 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.645853 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.645863 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.645871 4889 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64a68c3f-d267-44b8-b32c-c7f1579df495-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.645879 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64a68c3f-d267-44b8-b32c-c7f1579df495-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.664512 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.747900 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87c4c\" (UniqueName: \"kubernetes.io/projected/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-kube-api-access-87c4c\") pod \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.747979 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-config-data\") pod \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.748055 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-scripts\") pod \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.748108 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-combined-ca-bundle\") pod \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " Nov 28 07:08:16 crc 
kubenswrapper[4889]: I1128 07:08:16.748150 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-logs\") pod \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\" (UID: \"851c4202-ebf1-44df-97d1-4c9b9bfd1fba\") " Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.748807 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.749200 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-logs" (OuterVolumeSpecName: "logs") pod "851c4202-ebf1-44df-97d1-4c9b9bfd1fba" (UID: "851c4202-ebf1-44df-97d1-4c9b9bfd1fba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.752759 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-scripts" (OuterVolumeSpecName: "scripts") pod "851c4202-ebf1-44df-97d1-4c9b9bfd1fba" (UID: "851c4202-ebf1-44df-97d1-4c9b9bfd1fba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.755938 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-kube-api-access-87c4c" (OuterVolumeSpecName: "kube-api-access-87c4c") pod "851c4202-ebf1-44df-97d1-4c9b9bfd1fba" (UID: "851c4202-ebf1-44df-97d1-4c9b9bfd1fba"). InnerVolumeSpecName "kube-api-access-87c4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.779937 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-config-data" (OuterVolumeSpecName: "config-data") pod "851c4202-ebf1-44df-97d1-4c9b9bfd1fba" (UID: "851c4202-ebf1-44df-97d1-4c9b9bfd1fba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.785263 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "851c4202-ebf1-44df-97d1-4c9b9bfd1fba" (UID: "851c4202-ebf1-44df-97d1-4c9b9bfd1fba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.850312 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87c4c\" (UniqueName: \"kubernetes.io/projected/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-kube-api-access-87c4c\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.850365 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.850377 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.850387 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:16 crc kubenswrapper[4889]: I1128 07:08:16.850397 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851c4202-ebf1-44df-97d1-4c9b9bfd1fba-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.273069 4889 generic.go:334] "Generic (PLEG): container finished" podID="30f08826-4d6a-453d-8681-52d2446a5918" containerID="1e9eae91f17d3ffa4da9b7b6996803051af34401caec34a002b4fcace79e9594" exitCode=0 Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.273149 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-66vh9" event={"ID":"30f08826-4d6a-453d-8681-52d2446a5918","Type":"ContainerDied","Data":"1e9eae91f17d3ffa4da9b7b6996803051af34401caec34a002b4fcace79e9594"} Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.275404 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab27c833-5fb3-45dd-8bea-5abf637db41a","Type":"ContainerStarted","Data":"e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9"} Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.275432 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab27c833-5fb3-45dd-8bea-5abf637db41a","Type":"ContainerStarted","Data":"6d2887eb3793ba8f584c237dc914a74c049157a3f4c3e152aaee30643ce2827c"} Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.278510 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"64a68c3f-d267-44b8-b32c-c7f1579df495","Type":"ContainerDied","Data":"ef4968ca1324d4cdd224af66c206d7954d71e6a6f9e8003ac9dbe75b3316b1c1"} Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.278541 4889 scope.go:117] "RemoveContainer" containerID="70e1c406d0fcf5e551ec47f18b1c082d733686196c970db748909cbca78472cc" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.278686 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.285844 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-6spsr" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.286233 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6spsr" event={"ID":"851c4202-ebf1-44df-97d1-4c9b9bfd1fba","Type":"ContainerDied","Data":"204ee9a156d134e1b1e1c7a495070ac446f9f6e0648fd25a5b56f95a357a400a"} Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.286286 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204ee9a156d134e1b1e1c7a495070ac446f9f6e0648fd25a5b56f95a357a400a" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.294481 4889 generic.go:334] "Generic (PLEG): container finished" podID="fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" containerID="41c5f3c12a42d9eb237eca5b78a8ea4b30fa7324f282831cbd489a0028d90df2" exitCode=0 Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.294542 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vvjhk" event={"ID":"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85","Type":"ContainerDied","Data":"41c5f3c12a42d9eb237eca5b78a8ea4b30fa7324f282831cbd489a0028d90df2"} Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.361412 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.361459 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.390972 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:08:17 crc kubenswrapper[4889]: E1128 07:08:17.391966 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerName="glance-httpd" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.391987 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerName="glance-httpd" Nov 28 07:08:17 crc kubenswrapper[4889]: E1128 07:08:17.391998 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851c4202-ebf1-44df-97d1-4c9b9bfd1fba" containerName="placement-db-sync" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.392005 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="851c4202-ebf1-44df-97d1-4c9b9bfd1fba" containerName="placement-db-sync" Nov 28 07:08:17 crc kubenswrapper[4889]: E1128 07:08:17.392040 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerName="glance-log" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.392048 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerName="glance-log" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.392207 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="851c4202-ebf1-44df-97d1-4c9b9bfd1fba" containerName="placement-db-sync" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.392219 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerName="glance-httpd" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.392238 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" containerName="glance-log" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.397366 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.402929 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.403451 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.439757 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.476426 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5bbc5ddd4-vzclt"] Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.479556 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.482602 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.482637 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.482720 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.482927 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.483626 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fcmpn" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.524586 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bbc5ddd4-vzclt"] Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.573197 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.573248 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.573279 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.573303 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.573341 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6cw\" (UniqueName: \"kubernetes.io/projected/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-kube-api-access-vd6cw\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.573380 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.573405 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.573461 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675269 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-internal-tls-certs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675319 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvfv6\" (UniqueName: \"kubernetes.io/projected/010c335b-59f4-4016-976b-ac71eaf5d14f-kube-api-access-tvfv6\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675360 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675577 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-scripts\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675630 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-config-data\") pod \"placement-5bbc5ddd4-vzclt\" (UID: 
\"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675752 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-public-tls-certs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675797 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-combined-ca-bundle\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675821 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675844 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675876 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675896 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.675942 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd6cw\" (UniqueName: \"kubernetes.io/projected/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-kube-api-access-vd6cw\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.676007 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/010c335b-59f4-4016-976b-ac71eaf5d14f-logs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.676025 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.676046 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.676446 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.676862 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.678723 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.680992 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.683418 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.686590 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.686731 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.695858 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd6cw\" (UniqueName: \"kubernetes.io/projected/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-kube-api-access-vd6cw\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.710192 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.777746 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-public-tls-certs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.777818 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-combined-ca-bundle\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.777889 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/010c335b-59f4-4016-976b-ac71eaf5d14f-logs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.777934 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-internal-tls-certs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.777958 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvfv6\" (UniqueName: \"kubernetes.io/projected/010c335b-59f4-4016-976b-ac71eaf5d14f-kube-api-access-tvfv6\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.778005 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-scripts\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.778029 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-config-data\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.779080 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/010c335b-59f4-4016-976b-ac71eaf5d14f-logs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.782804 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-combined-ca-bundle\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.783184 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-internal-tls-certs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.785569 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-public-tls-certs\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.785854 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-config-data\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.794922 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.800899 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-scripts\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:17 crc kubenswrapper[4889]: I1128 07:08:17.808193 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvfv6\" (UniqueName: \"kubernetes.io/projected/010c335b-59f4-4016-976b-ac71eaf5d14f-kube-api-access-tvfv6\") pod \"placement-5bbc5ddd4-vzclt\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:18 crc kubenswrapper[4889]: I1128 07:08:18.103219 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:18 crc kubenswrapper[4889]: I1128 07:08:18.305732 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab27c833-5fb3-45dd-8bea-5abf637db41a","Type":"ContainerStarted","Data":"8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0"} Nov 28 07:08:18 crc kubenswrapper[4889]: I1128 07:08:18.338281 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.3382559609999998 podStartE2EDuration="3.338255961s" podCreationTimestamp="2025-11-28 07:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:18.333070644 +0000 UTC m=+1221.303304799" watchObservedRunningTime="2025-11-28 07:08:18.338255961 +0000 UTC m=+1221.308490126" Nov 28 07:08:19 crc kubenswrapper[4889]: I1128 07:08:19.342138 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a68c3f-d267-44b8-b32c-c7f1579df495" path="/var/lib/kubelet/pods/64a68c3f-d267-44b8-b32c-c7f1579df495/volumes" Nov 28 07:08:19 crc kubenswrapper[4889]: I1128 07:08:19.923082 4889 scope.go:117] "RemoveContainer" containerID="2bffcc7be4324f61f9b0f367da15245bfa16e5ca9d34469a56244277b4d53278" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.033843 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-66vh9" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.050337 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.222221 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-fernet-keys\") pod \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.222276 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-credential-keys\") pod \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.222353 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-config-data\") pod \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.222391 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zpxn\" (UniqueName: \"kubernetes.io/projected/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-kube-api-access-4zpxn\") pod \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.222472 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-config\") pod \"30f08826-4d6a-453d-8681-52d2446a5918\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 
07:08:20.222495 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-combined-ca-bundle\") pod \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.222539 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-combined-ca-bundle\") pod \"30f08826-4d6a-453d-8681-52d2446a5918\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.222587 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87kfw\" (UniqueName: \"kubernetes.io/projected/30f08826-4d6a-453d-8681-52d2446a5918-kube-api-access-87kfw\") pod \"30f08826-4d6a-453d-8681-52d2446a5918\" (UID: \"30f08826-4d6a-453d-8681-52d2446a5918\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.222654 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-scripts\") pod \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\" (UID: \"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85\") " Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.228919 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f08826-4d6a-453d-8681-52d2446a5918-kube-api-access-87kfw" (OuterVolumeSpecName: "kube-api-access-87kfw") pod "30f08826-4d6a-453d-8681-52d2446a5918" (UID: "30f08826-4d6a-453d-8681-52d2446a5918"). InnerVolumeSpecName "kube-api-access-87kfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.229140 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" (UID: "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.229272 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-scripts" (OuterVolumeSpecName: "scripts") pod "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" (UID: "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.229595 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-kube-api-access-4zpxn" (OuterVolumeSpecName: "kube-api-access-4zpxn") pod "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" (UID: "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85"). InnerVolumeSpecName "kube-api-access-4zpxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.241451 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" (UID: "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.251843 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-config-data" (OuterVolumeSpecName: "config-data") pod "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" (UID: "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.255474 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-config" (OuterVolumeSpecName: "config") pod "30f08826-4d6a-453d-8681-52d2446a5918" (UID: "30f08826-4d6a-453d-8681-52d2446a5918"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.259095 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" (UID: "fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.266611 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30f08826-4d6a-453d-8681-52d2446a5918" (UID: "30f08826-4d6a-453d-8681-52d2446a5918"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.322764 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-66vh9" event={"ID":"30f08826-4d6a-453d-8681-52d2446a5918","Type":"ContainerDied","Data":"26336e48f1946ac8dea2e47b191c50eacf4012fefe2be158dd127f4d92fc9732"} Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.322802 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26336e48f1946ac8dea2e47b191c50eacf4012fefe2be158dd127f4d92fc9732" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.322868 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-66vh9" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.324654 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.324767 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.324924 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f08826-4d6a-453d-8681-52d2446a5918-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.324987 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87kfw\" (UniqueName: \"kubernetes.io/projected/30f08826-4d6a-453d-8681-52d2446a5918-kube-api-access-87kfw\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.325042 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.325111 4889 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.325166 4889 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.325232 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.325300 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zpxn\" (UniqueName: \"kubernetes.io/projected/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85-kube-api-access-4zpxn\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.338555 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vvjhk" event={"ID":"fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85","Type":"ContainerDied","Data":"b71d00e6b0ae62ab31a73ea135491a7ff32208859fc2693d8177e8aa658f0686"} Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.338674 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b71d00e6b0ae62ab31a73ea135491a7ff32208859fc2693d8177e8aa658f0686" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.338814 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vvjhk" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.644974 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.711362 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7568d75687-h7sjj"] Nov 28 07:08:20 crc kubenswrapper[4889]: I1128 07:08:20.711879 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" podUID="5d4065c3-8078-4446-a480-78054208a993" containerName="dnsmasq-dns" containerID="cri-o://961dff94a3f0a3b5b17904d74804729ed1b0381721f45560e6b3844e2fed7819" gracePeriod=10 Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.210663 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-55c8d644db-cqxsn"] Nov 28 07:08:21 crc kubenswrapper[4889]: E1128 07:08:21.211152 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f08826-4d6a-453d-8681-52d2446a5918" containerName="neutron-db-sync" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.211174 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f08826-4d6a-453d-8681-52d2446a5918" containerName="neutron-db-sync" Nov 28 07:08:21 crc kubenswrapper[4889]: E1128 07:08:21.211217 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" containerName="keystone-bootstrap" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.211226 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" containerName="keystone-bootstrap" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.211481 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" containerName="keystone-bootstrap" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.211523 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f08826-4d6a-453d-8681-52d2446a5918" containerName="neutron-db-sync" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.217770 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.221347 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.221686 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.221864 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.222083 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.222305 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j7bgn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.222455 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.236106 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55c8d644db-cqxsn"] Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.289122 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f677dd449-zh7j5"] Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.290966 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.314597 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f677dd449-zh7j5"] Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.344836 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-fernet-keys\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.344889 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-config-data\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.344982 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-combined-ca-bundle\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.345011 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-credential-keys\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.345036 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzhhs\" (UniqueName: 
\"kubernetes.io/projected/07dfa6e3-4c33-403d-96c6-819c44224466-kube-api-access-dzhhs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.345075 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-internal-tls-certs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.345112 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-public-tls-certs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.345298 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-scripts\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.362599 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85bffcf884-2hbfs"] Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.364254 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.368260 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85bffcf884-2hbfs"] Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.373772 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.374179 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7sw2k" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.374368 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.377411 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.446799 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-fernet-keys\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.446853 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-config-data\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.446908 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-combined-ca-bundle\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.446938 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfvd5\" (UniqueName: \"kubernetes.io/projected/237dc8d4-ff16-46ab-a728-212825640012-kube-api-access-cfvd5\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.446967 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-credential-keys\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.446991 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzhhs\" (UniqueName: \"kubernetes.io/projected/07dfa6e3-4c33-403d-96c6-819c44224466-kube-api-access-dzhhs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.447036 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-nb\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.447055 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-svc\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.447083 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-internal-tls-certs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.447116 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-swift-storage-0\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.447133 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-sb\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.447163 4889 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-public-tls-certs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.447218 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-config\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.447244 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-scripts\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.453331 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-internal-tls-certs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.472333 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-combined-ca-bundle\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.472931 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-credential-keys\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.472957 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-fernet-keys\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.472982 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-public-tls-certs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.475091 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzhhs\" (UniqueName: \"kubernetes.io/projected/07dfa6e3-4c33-403d-96c6-819c44224466-kube-api-access-dzhhs\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.476457 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-config-data\") pod 
\"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.476567 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-scripts\") pod \"keystone-55c8d644db-cqxsn\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.543087 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549221 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfvd5\" (UniqueName: \"kubernetes.io/projected/237dc8d4-ff16-46ab-a728-212825640012-kube-api-access-cfvd5\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549282 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-httpd-config\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549346 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-nb\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549373 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-svc\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549415 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-swift-storage-0\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549437 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-sb\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549466 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-combined-ca-bundle\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549693 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ddlz\" (UniqueName: \"kubernetes.io/projected/acdfb982-66e1-4791-b46a-e6c12765560d-kube-api-access-4ddlz\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549758 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-config\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549834 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-config\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.549860 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-ovndb-tls-certs\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.551147 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-sb\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.551398 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-config\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.551730 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-nb\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.552012 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-swift-storage-0\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.552167 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-svc\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.571598 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfvd5\" (UniqueName: 
\"kubernetes.io/projected/237dc8d4-ff16-46ab-a728-212825640012-kube-api-access-cfvd5\") pod \"dnsmasq-dns-7f677dd449-zh7j5\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.620872 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.651731 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-config\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.651780 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-ovndb-tls-certs\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.651856 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-httpd-config\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.651916 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-combined-ca-bundle\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.651934 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ddlz\" (UniqueName: \"kubernetes.io/projected/acdfb982-66e1-4791-b46a-e6c12765560d-kube-api-access-4ddlz\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.656492 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-config\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.658477 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-ovndb-tls-certs\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.659293 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-combined-ca-bundle\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.666967 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4ddlz\" (UniqueName: \"kubernetes.io/projected/acdfb982-66e1-4791-b46a-e6c12765560d-kube-api-access-4ddlz\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.671957 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-httpd-config\") pod \"neutron-85bffcf884-2hbfs\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:21 crc kubenswrapper[4889]: I1128 07:08:21.726563 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.384691 4889 generic.go:334] "Generic (PLEG): container finished" podID="5d4065c3-8078-4446-a480-78054208a993" containerID="961dff94a3f0a3b5b17904d74804729ed1b0381721f45560e6b3844e2fed7819" exitCode=0 Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.384914 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" event={"ID":"5d4065c3-8078-4446-a480-78054208a993","Type":"ContainerDied","Data":"961dff94a3f0a3b5b17904d74804729ed1b0381721f45560e6b3844e2fed7819"} Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.448403 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.568752 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-config\") pod \"5d4065c3-8078-4446-a480-78054208a993\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.569134 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-sb\") pod \"5d4065c3-8078-4446-a480-78054208a993\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.569174 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txgck\" (UniqueName: \"kubernetes.io/projected/5d4065c3-8078-4446-a480-78054208a993-kube-api-access-txgck\") pod \"5d4065c3-8078-4446-a480-78054208a993\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.569507 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-nb\") pod \"5d4065c3-8078-4446-a480-78054208a993\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.569566 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-svc\") pod \"5d4065c3-8078-4446-a480-78054208a993\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.569610 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-swift-storage-0\") pod \"5d4065c3-8078-4446-a480-78054208a993\" (UID: \"5d4065c3-8078-4446-a480-78054208a993\") " Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.577685 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4065c3-8078-4446-a480-78054208a993-kube-api-access-txgck" (OuterVolumeSpecName: "kube-api-access-txgck") pod "5d4065c3-8078-4446-a480-78054208a993" (UID: "5d4065c3-8078-4446-a480-78054208a993"). InnerVolumeSpecName "kube-api-access-txgck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.628570 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5d4065c3-8078-4446-a480-78054208a993" (UID: "5d4065c3-8078-4446-a480-78054208a993"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.678138 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txgck\" (UniqueName: \"kubernetes.io/projected/5d4065c3-8078-4446-a480-78054208a993-kube-api-access-txgck\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.678174 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.693197 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5d4065c3-8078-4446-a480-78054208a993" (UID: "5d4065c3-8078-4446-a480-78054208a993"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.693530 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5d4065c3-8078-4446-a480-78054208a993" (UID: "5d4065c3-8078-4446-a480-78054208a993"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.708965 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d4065c3-8078-4446-a480-78054208a993" (UID: "5d4065c3-8078-4446-a480-78054208a993"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.779898 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.779927 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.779938 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.786881 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-config" (OuterVolumeSpecName: "config") pod "5d4065c3-8078-4446-a480-78054208a993" (UID: "5d4065c3-8078-4446-a480-78054208a993"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.868634 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bbc5ddd4-vzclt"] Nov 28 07:08:22 crc kubenswrapper[4889]: I1128 07:08:22.903567 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4065c3-8078-4446-a480-78054208a993-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.201798 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.325731 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f677dd449-zh7j5"] Nov 28 07:08:23 crc kubenswrapper[4889]: W1128 07:08:23.343764 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07dfa6e3_4c33_403d_96c6_819c44224466.slice/crio-0cf33f95b58d373300c547c33b8507f7b7ea8baddfc644160d6677460298f59e WatchSource:0}: Error finding container 0cf33f95b58d373300c547c33b8507f7b7ea8baddfc644160d6677460298f59e: Status 404 returned error can't find the container with id 0cf33f95b58d373300c547c33b8507f7b7ea8baddfc644160d6677460298f59e Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.350346 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55c8d644db-cqxsn"] Nov 28 07:08:23 crc kubenswrapper[4889]: W1128 07:08:23.355763 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacdfb982_66e1_4791_b46a_e6c12765560d.slice/crio-42b61ff5a5bfa71bb14c55d9a64b22e1f302454e6398a603c78647177cb97264 WatchSource:0}: Error finding container 42b61ff5a5bfa71bb14c55d9a64b22e1f302454e6398a603c78647177cb97264: Status 404 returned error can't find the container with id 42b61ff5a5bfa71bb14c55d9a64b22e1f302454e6398a603c78647177cb97264 Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.363224 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85bffcf884-2hbfs"] Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.407980 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-85bffcf884-2hbfs" event={"ID":"acdfb982-66e1-4791-b46a-e6c12765560d","Type":"ContainerStarted","Data":"42b61ff5a5bfa71bb14c55d9a64b22e1f302454e6398a603c78647177cb97264"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.410855 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" event={"ID":"237dc8d4-ff16-46ab-a728-212825640012","Type":"ContainerStarted","Data":"d58bd89b3e96f8536d8844e6d3c8329a52284a1d0a0e5cbd354b8e1e2a019628"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.414800 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bbc5ddd4-vzclt" event={"ID":"010c335b-59f4-4016-976b-ac71eaf5d14f","Type":"ContainerStarted","Data":"916841af475c0d0409c239e605ccdb71c123e2852a495b97c814602f89fea785"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.414841 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bbc5ddd4-vzclt" event={"ID":"010c335b-59f4-4016-976b-ac71eaf5d14f","Type":"ContainerStarted","Data":"ff5c205f4bf58cd1d0ad31c563376d2141a5b307862a95d33a353065a03c5642"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.414850 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bbc5ddd4-vzclt" event={"ID":"010c335b-59f4-4016-976b-ac71eaf5d14f","Type":"ContainerStarted","Data":"06a004a00acde25a83c3ee146f03bec54da5eaf6b1e6cd5ff431b416bcef4b1d"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.415042 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.422670 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a","Type":"ContainerStarted","Data":"2c0de69bdd0f1b11807ebfc3ce4ebf809b4136164f78946ad3b0a1e8f273c450"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.424421 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-578vb" event={"ID":"cb878697-faf4-4e49-9d9c-54f02215856b","Type":"ContainerStarted","Data":"9fd6cb9711212f1b50db5fb86ef597f96c8e223cca5df078fa1c7ccf1975f3c1"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.429056 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55c8d644db-cqxsn" event={"ID":"07dfa6e3-4c33-403d-96c6-819c44224466","Type":"ContainerStarted","Data":"0cf33f95b58d373300c547c33b8507f7b7ea8baddfc644160d6677460298f59e"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.442364 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5bbc5ddd4-vzclt" podStartSLOduration=6.442347164 podStartE2EDuration="6.442347164s" podCreationTimestamp="2025-11-28 07:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:23.432431431 +0000 UTC m=+1226.402665596" watchObservedRunningTime="2025-11-28 07:08:23.442347164 +0000 UTC m=+1226.412581319" Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.442681 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerStarted","Data":"32ca4733b23849d00eb4a8f9251b1f248f60fb9aa667e42fe8ec2eb636774cc9"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.444424 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7568d75687-h7sjj" event={"ID":"5d4065c3-8078-4446-a480-78054208a993","Type":"ContainerDied","Data":"0f03db7fcd8fccf2583352cd98d9bb6d77f7f14fb16bcb63fdf8bf589617fefa"} Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.444457 4889 scope.go:117] "RemoveContainer" containerID="961dff94a3f0a3b5b17904d74804729ed1b0381721f45560e6b3844e2fed7819" Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.444579 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7568d75687-h7sjj" Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.453674 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-578vb" podStartSLOduration=2.20599447 podStartE2EDuration="33.453656758s" podCreationTimestamp="2025-11-28 07:07:50 +0000 UTC" firstStartedPulling="2025-11-28 07:07:51.793833091 +0000 UTC m=+1194.764067246" lastFinishedPulling="2025-11-28 07:08:23.041495379 +0000 UTC m=+1226.011729534" observedRunningTime="2025-11-28 07:08:23.445898514 +0000 UTC m=+1226.416132669" watchObservedRunningTime="2025-11-28 07:08:23.453656758 +0000 UTC m=+1226.423890913" Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.497101 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7568d75687-h7sjj"] Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.503677 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7568d75687-h7sjj"] Nov 28 07:08:23 crc kubenswrapper[4889]: I1128 07:08:23.522326 4889 scope.go:117] "RemoveContainer" containerID="1e85dcd1deb990841d2aae87fd781728a3695f0a5331ae9f0d27e49767050bb5" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.452769 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.603155 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c99d75dcc-cgtnj"] Nov 28 07:08:24 crc kubenswrapper[4889]: E1128 07:08:24.603526 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4065c3-8078-4446-a480-78054208a993" containerName="dnsmasq-dns" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.603542 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4065c3-8078-4446-a480-78054208a993" containerName="dnsmasq-dns" Nov 28 07:08:24 crc kubenswrapper[4889]: E1128 07:08:24.603570 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4065c3-8078-4446-a480-78054208a993" containerName="init" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.603577 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4065c3-8078-4446-a480-78054208a993" containerName="init" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.603955 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4065c3-8078-4446-a480-78054208a993" containerName="dnsmasq-dns" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.605041 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.606662 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.609603 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.636621 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c99d75dcc-cgtnj"] Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.741286 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-config\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.741808 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-combined-ca-bundle\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.741856 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-internal-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.742209 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-public-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.742236 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-httpd-config\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.742265 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-ovndb-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.742304 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lksrx\" (UniqueName: \"kubernetes.io/projected/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-kube-api-access-lksrx\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.843900 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-public-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.843941 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-httpd-config\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.843969 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-ovndb-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.843993 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lksrx\" (UniqueName: \"kubernetes.io/projected/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-kube-api-access-lksrx\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.844018 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-config\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.844044 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-combined-ca-bundle\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.844066 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-internal-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.850029 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-internal-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.850637 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-combined-ca-bundle\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.850970 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-config\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") 
" pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.851245 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-public-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.851517 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-httpd-config\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.852037 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-ovndb-tls-certs\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.862826 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lksrx\" (UniqueName: \"kubernetes.io/projected/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-kube-api-access-lksrx\") pod \"neutron-5c99d75dcc-cgtnj\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:24 crc kubenswrapper[4889]: I1128 07:08:24.920854 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.396465 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4065c3-8078-4446-a480-78054208a993" path="/var/lib/kubelet/pods/5d4065c3-8078-4446-a480-78054208a993/volumes" Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.446631 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c99d75dcc-cgtnj"] Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.480735 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bffcf884-2hbfs" event={"ID":"acdfb982-66e1-4791-b46a-e6c12765560d","Type":"ContainerStarted","Data":"dfa5003630dcf04869c1dc354bcd218515a456f185def3d88418656c8e6216cd"} Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.487208 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" event={"ID":"237dc8d4-ff16-46ab-a728-212825640012","Type":"ContainerStarted","Data":"17567b5a3d3d57af093814d7190ae83ca8c790a8d08e6ba123758607964656f4"} Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.496065 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c99d75dcc-cgtnj" event={"ID":"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef","Type":"ContainerStarted","Data":"220fc208c1d6de01525925b2f0eca71da7504dcbb2ddbc8205fa6b81c046785d"} Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.497908 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a","Type":"ContainerStarted","Data":"7ffdbe6958c1311ee48cd241ca1d53677f0cc72b4499994d7a16263eff3b6487"} Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.500467 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55c8d644db-cqxsn" 
event={"ID":"07dfa6e3-4c33-403d-96c6-819c44224466","Type":"ContainerStarted","Data":"6a06f1ca551a6cfc2a03c4624310248aaa2f03752d3fc88f4cfb44ec7049ede3"} Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.708157 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.710200 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.746420 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 07:08:25 crc kubenswrapper[4889]: I1128 07:08:25.750272 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 07:08:26 crc kubenswrapper[4889]: I1128 07:08:26.515255 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c99d75dcc-cgtnj" event={"ID":"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef","Type":"ContainerStarted","Data":"7ca5f31a155561a771625bbbaea4e69473efa075212d50535e793088359bdafe"} Nov 28 07:08:26 crc kubenswrapper[4889]: I1128 07:08:26.518533 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bffcf884-2hbfs" event={"ID":"acdfb982-66e1-4791-b46a-e6c12765560d","Type":"ContainerStarted","Data":"c0d210825e05d6b60ab162be789d8a9af6b29bc796f54dbf9281d496f2b52198"} Nov 28 07:08:26 crc kubenswrapper[4889]: I1128 07:08:26.520471 4889 generic.go:334] "Generic (PLEG): container finished" podID="237dc8d4-ff16-46ab-a728-212825640012" containerID="17567b5a3d3d57af093814d7190ae83ca8c790a8d08e6ba123758607964656f4" exitCode=0 Nov 28 07:08:26 crc kubenswrapper[4889]: I1128 07:08:26.520495 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" event={"ID":"237dc8d4-ff16-46ab-a728-212825640012","Type":"ContainerDied","Data":"17567b5a3d3d57af093814d7190ae83ca8c790a8d08e6ba123758607964656f4"} Nov 28 07:08:26 crc kubenswrapper[4889]: I1128 07:08:26.521347 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 07:08:26 crc kubenswrapper[4889]: I1128 07:08:26.521371 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 07:08:26 crc kubenswrapper[4889]: I1128 07:08:26.577051 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-55c8d644db-cqxsn" podStartSLOduration=5.577034936 podStartE2EDuration="5.577034936s" podCreationTimestamp="2025-11-28 07:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:26.574103143 +0000 UTC m=+1229.544337298" watchObservedRunningTime="2025-11-28 07:08:26.577034936 +0000 UTC m=+1229.547269091" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.530246 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c99d75dcc-cgtnj" event={"ID":"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef","Type":"ContainerStarted","Data":"a93ee07b36d6a14a36fd7e9a347daa5368daca2560780121eb1189c052d07f4b"} Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.530766 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.532727 4889 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a","Type":"ContainerStarted","Data":"85b12a258d7e9a073bbd73f28c81297df92852331940f5dbfda2d3b0a81b900f"} Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.536220 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" event={"ID":"237dc8d4-ff16-46ab-a728-212825640012","Type":"ContainerStarted","Data":"ac2380e4ed7c696e5a1f6ff73ec36b3f5e62b95b602f8e7c48f289056db38027"} Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.537559 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.569970 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c99d75dcc-cgtnj" podStartSLOduration=3.569945987 podStartE2EDuration="3.569945987s" podCreationTimestamp="2025-11-28 07:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:27.552303993 +0000 UTC m=+1230.522538158" watchObservedRunningTime="2025-11-28 07:08:27.569945987 +0000 UTC m=+1230.540180142" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.607659 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.607639715 podStartE2EDuration="10.607639715s" podCreationTimestamp="2025-11-28 07:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:27.582166194 +0000 UTC m=+1230.552400349" watchObservedRunningTime="2025-11-28 07:08:27.607639715 +0000 UTC m=+1230.577873870" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.613506 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85bffcf884-2hbfs" podStartSLOduration=6.613489452 podStartE2EDuration="6.613489452s" podCreationTimestamp="2025-11-28 07:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:27.603950012 +0000 UTC m=+1230.574184177" watchObservedRunningTime="2025-11-28 07:08:27.613489452 +0000 UTC m=+1230.583723607" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.635031 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" podStartSLOduration=6.635011174 podStartE2EDuration="6.635011174s" podCreationTimestamp="2025-11-28 07:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:27.624663733 +0000 UTC m=+1230.594897918" watchObservedRunningTime="2025-11-28 07:08:27.635011174 +0000 UTC m=+1230.605245339" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.796568 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.796621 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.827531 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Nov 28 07:08:27 crc kubenswrapper[4889]: I1128 07:08:27.855443 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.551279 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q5vz8" event={"ID":"76a51e5e-b005-4d01-b0a3-86f27d671c32","Type":"ContainerStarted","Data":"8920163e2dc589b5513e696a85abe83032ed5431a721238125a0aa7d7e0d0f9b"} Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.563069 4889 generic.go:334] "Generic (PLEG): container finished" podID="cb878697-faf4-4e49-9d9c-54f02215856b" containerID="9fd6cb9711212f1b50db5fb86ef597f96c8e223cca5df078fa1c7ccf1975f3c1" exitCode=0 Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.563371 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-578vb" event={"ID":"cb878697-faf4-4e49-9d9c-54f02215856b","Type":"ContainerDied","Data":"9fd6cb9711212f1b50db5fb86ef597f96c8e223cca5df078fa1c7ccf1975f3c1"} Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.563398 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.563635 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.564015 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.570622 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-q5vz8" podStartSLOduration=3.175432589 podStartE2EDuration="38.570605201s" podCreationTimestamp="2025-11-28 07:07:50 +0000 UTC" firstStartedPulling="2025-11-28 07:07:51.546695663 +0000 UTC m=+1194.516929818" lastFinishedPulling="2025-11-28 07:08:26.941868275 +0000 UTC m=+1229.912102430" observedRunningTime="2025-11-28 07:08:28.565470622 +0000 UTC m=+1231.535704777" watchObservedRunningTime="2025-11-28 07:08:28.570605201 +0000 UTC m=+1231.540839356" Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.639225 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.639311 4889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.701409 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.783090 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:08:28 crc kubenswrapper[4889]: I1128 07:08:28.783155 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:08:31 crc kubenswrapper[4889]: I1128 
07:08:31.518431 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:31 crc kubenswrapper[4889]: I1128 07:08:31.525663 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 07:08:31 crc kubenswrapper[4889]: I1128 07:08:31.622971 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:31 crc kubenswrapper[4889]: I1128 07:08:31.696816 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f645789c-7wjjp"] Nov 28 07:08:31 crc kubenswrapper[4889]: I1128 07:08:31.697049 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" podUID="d7499407-c822-4002-8f62-d423b29d39ab" containerName="dnsmasq-dns" containerID="cri-o://c6f45807a67961737d3da251c09401a7668cf8963ef7251d5407b3781910d3ef" gracePeriod=10 Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.237093 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-578vb" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.318713 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-db-sync-config-data\") pod \"cb878697-faf4-4e49-9d9c-54f02215856b\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.318863 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-combined-ca-bundle\") pod \"cb878697-faf4-4e49-9d9c-54f02215856b\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.318939 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zh9n\" (UniqueName: \"kubernetes.io/projected/cb878697-faf4-4e49-9d9c-54f02215856b-kube-api-access-4zh9n\") pod \"cb878697-faf4-4e49-9d9c-54f02215856b\" (UID: \"cb878697-faf4-4e49-9d9c-54f02215856b\") " Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.324901 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb878697-faf4-4e49-9d9c-54f02215856b-kube-api-access-4zh9n" (OuterVolumeSpecName: "kube-api-access-4zh9n") pod "cb878697-faf4-4e49-9d9c-54f02215856b" (UID: "cb878697-faf4-4e49-9d9c-54f02215856b"). InnerVolumeSpecName "kube-api-access-4zh9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.343851 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cb878697-faf4-4e49-9d9c-54f02215856b" (UID: "cb878697-faf4-4e49-9d9c-54f02215856b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.353246 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb878697-faf4-4e49-9d9c-54f02215856b" (UID: "cb878697-faf4-4e49-9d9c-54f02215856b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.421213 4889 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.421242 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb878697-faf4-4e49-9d9c-54f02215856b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.421252 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zh9n\" (UniqueName: \"kubernetes.io/projected/cb878697-faf4-4e49-9d9c-54f02215856b-kube-api-access-4zh9n\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.609717 4889 generic.go:334] "Generic (PLEG): container finished" podID="76a51e5e-b005-4d01-b0a3-86f27d671c32" containerID="8920163e2dc589b5513e696a85abe83032ed5431a721238125a0aa7d7e0d0f9b" exitCode=0 Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.609798 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q5vz8" event={"ID":"76a51e5e-b005-4d01-b0a3-86f27d671c32","Type":"ContainerDied","Data":"8920163e2dc589b5513e696a85abe83032ed5431a721238125a0aa7d7e0d0f9b"} Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.614635 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-578vb" event={"ID":"cb878697-faf4-4e49-9d9c-54f02215856b","Type":"ContainerDied","Data":"d76b1485721804a84995fac033c230daf341ff8ac6e3079f5a43d07c1ad40265"} Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.614675 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d76b1485721804a84995fac033c230daf341ff8ac6e3079f5a43d07c1ad40265" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.614759 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-578vb" Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.621244 4889 generic.go:334] "Generic (PLEG): container finished" podID="d7499407-c822-4002-8f62-d423b29d39ab" containerID="c6f45807a67961737d3da251c09401a7668cf8963ef7251d5407b3781910d3ef" exitCode=0 Nov 28 07:08:32 crc kubenswrapper[4889]: I1128 07:08:32.621285 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" event={"ID":"d7499407-c822-4002-8f62-d423b29d39ab","Type":"ContainerDied","Data":"c6f45807a67961737d3da251c09401a7668cf8963ef7251d5407b3781910d3ef"} Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.445266 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-855dc646d8-klfjs"] Nov 28 07:08:33 crc kubenswrapper[4889]: E1128 07:08:33.446311 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb878697-faf4-4e49-9d9c-54f02215856b" containerName="barbican-db-sync" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.446384 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb878697-faf4-4e49-9d9c-54f02215856b" containerName="barbican-db-sync" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.446607 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb878697-faf4-4e49-9d9c-54f02215856b" containerName="barbican-db-sync" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.447617 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.453745 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.453825 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.453954 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s5g4w" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.464870 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-855dc646d8-klfjs"] Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.512246 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-59dcb6998f-sb4k2"] Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.513798 4889 util.go:30] "No sandbox for pod can be found. 
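
[Annotation] Records in this capture are wrapped mid-message, with several journald records per physical line. Every record starts with a journald prefix ("Nov 28 HH:MM:SS crc kubenswrapper[pid]: ") followed by a klog header (severity letter, MMDD, wall time, pid, source file:line, then the message). A small sketch that re-splits a blob like this back into one structured entry per record; the regexes are our own, written against the format visible above, and re.split on a zero-width lookahead needs Python 3.7+:

    import re

    # Zero-width match at the start of each journald record.
    RECORD_START = re.compile(
        r'(?=(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) '
        r'\d{1,2} \d{2}:\d{2}:\d{2} \S+ kubenswrapper\[\d+\]: )')
    # klog header, e.g. "I1128 07:08:32.318713    4889 reconciler_common.go:159] <message>"
    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)'
        r'\s+(?P<pid>\d+) (?P<src>[\w./-]+:\d+)\] (?P<msg>.*)', re.DOTALL)

    def split_entries(blob: str):
        """Yield (severity, wall time, source file:line, message) per record."""
        for chunk in RECORD_START.split(blob):
            m = KLOG.search(chunk)
            if m:
                yield m['sev'], m['time'], m['src'], m['msg'].strip()

Feeding this section through split_entries yields one tuple per kubelet record, regardless of how the dump was line-wrapped.
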
Need to start a new one" pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.532352 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.542222 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qhh\" (UniqueName: \"kubernetes.io/projected/d29dfd27-459d-4ade-8119-3c84095d0b1b-kube-api-access-b6qhh\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.542289 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.542359 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data-custom\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.542402 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-combined-ca-bundle\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.542533 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d29dfd27-459d-4ade-8119-3c84095d0b1b-logs\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.675776 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data-custom\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.675898 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d29dfd27-459d-4ade-8119-3c84095d0b1b-logs\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.676009 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-combined-ca-bundle\") pod 
\"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.676048 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qhh\" (UniqueName: \"kubernetes.io/projected/d29dfd27-459d-4ade-8119-3c84095d0b1b-kube-api-access-b6qhh\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.676067 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.676096 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741842f5-b565-43c8-bd99-eb15782fcf18-logs\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.676110 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dz7\" (UniqueName: \"kubernetes.io/projected/741842f5-b565-43c8-bd99-eb15782fcf18-kube-api-access-m8dz7\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.676160 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data-custom\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.676183 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.676241 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-combined-ca-bundle\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.678059 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59dcb6998f-sb4k2"] Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.679492 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d29dfd27-459d-4ade-8119-3c84095d0b1b-logs\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: 
\"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.696490 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-combined-ca-bundle\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.697134 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data-custom\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.703995 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64dfd64c45-jvlj5"] Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.705986 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.706861 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.713127 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qhh\" (UniqueName: \"kubernetes.io/projected/d29dfd27-459d-4ade-8119-3c84095d0b1b-kube-api-access-b6qhh\") pod \"barbican-keystone-listener-855dc646d8-klfjs\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.713984 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64dfd64c45-jvlj5"] Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.768848 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b7cd58688-jtgqv"] Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.770357 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.776421 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777594 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-combined-ca-bundle\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777634 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741842f5-b565-43c8-bd99-eb15782fcf18-logs\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777661 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dz7\" (UniqueName: \"kubernetes.io/projected/741842f5-b565-43c8-bd99-eb15782fcf18-kube-api-access-m8dz7\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777683 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-svc\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777698 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf2xl\" (UniqueName: \"kubernetes.io/projected/36bf3778-0496-4590-a2fd-18516ee7c057-kube-api-access-gf2xl\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777746 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777766 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-config\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777789 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-swift-storage-0\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777825 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data-custom\") pod 
\"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777878 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-nb\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.777903 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-sb\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.778452 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.779212 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741842f5-b565-43c8-bd99-eb15782fcf18-logs\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.782638 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7cd58688-jtgqv"] Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.782823 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data-custom\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.784812 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.789083 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-combined-ca-bundle\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.802511 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dz7\" (UniqueName: \"kubernetes.io/projected/741842f5-b565-43c8-bd99-eb15782fcf18-kube-api-access-m8dz7\") pod \"barbican-worker-59dcb6998f-sb4k2\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879449 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-nb\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: 
\"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879507 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-sb\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879535 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data-custom\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879565 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-combined-ca-bundle\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879591 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-svc\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879608 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf2xl\" (UniqueName: \"kubernetes.io/projected/36bf3778-0496-4590-a2fd-18516ee7c057-kube-api-access-gf2xl\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879643 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-config\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879660 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-logs\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879676 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879694 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-swift-storage-0\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" 
(UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.879757 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-kube-api-access-xf4kh\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.883055 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-sb\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.883653 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-config\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.884116 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-nb\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.888343 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-svc\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.888376 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-swift-storage-0\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.903641 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf2xl\" (UniqueName: \"kubernetes.io/projected/36bf3778-0496-4590-a2fd-18516ee7c057-kube-api-access-gf2xl\") pod \"dnsmasq-dns-64dfd64c45-jvlj5\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.924180 4889 util.go:30] "No sandbox for pod can be found. 
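
[Annotation] The reconciler_common and operation_generator entries above walk each volume of the new barbican and dnsmasq pods through a fixed lifecycle: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded (and, for pods being torn down, UnmountVolume started, UnmountVolume.TearDown succeeded, Volume detached). A rough sketch that recovers the last observed phase per volume from lines like these; it keys on the volume name alone, so it is a simplification, since names like combined-ca-bundle recur across pods:

    import re

    # Lifecycle phases in the order kubelet logs them for a volume.
    PHASES = [
        ('VerifyControllerAttachedVolume started', 'attach-verified'),
        ('MountVolume started',                    'mounting'),
        ('MountVolume.SetUp succeeded',            'mounted'),
        ('UnmountVolume started',                  'unmounting'),
        ('UnmountVolume.TearDown succeeded',       'torn-down'),
        ('Volume detached',                        'detached'),
    ]
    # First quoted token after 'volume'; tolerates the escaped quotes (\") in this dump.
    VOL = re.compile(r'volume \\?"(?P<vol>[^"\\]+)\\?"')

    def volume_states(lines):
        """Return {volume name: last observed phase}. TearDown lines quote the
        full plugin path rather than the short name, so those keys differ; a
        fuller version would key on (pod UID, volume name)."""
        state = {}
        for line in lines:
            m = VOL.search(line)
            if not m:
                continue
            for needle, phase in PHASES:
                if needle in line:
                    state[m['vol']] = phase
                    break
        return state

Run over this section, the barbican-keystone-listener and barbican-worker volumes all end in 'mounted', while the cinder-db-sync and old dnsmasq volumes end in 'detached'.
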
Need to start a new one" pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.982059 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.982116 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-logs\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.982202 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-kube-api-access-xf4kh\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.982288 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data-custom\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.982328 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-combined-ca-bundle\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.984315 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-logs\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.988237 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.990064 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-combined-ca-bundle\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:33 crc kubenswrapper[4889]: I1128 07:08:33.993240 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data-custom\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:34 
crc kubenswrapper[4889]: I1128 07:08:34.007438 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-kube-api-access-xf4kh\") pod \"barbican-api-5b7cd58688-jtgqv\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") " pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.171673 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.176748 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.177553 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.189213 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.291401 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-svc\") pod \"d7499407-c822-4002-8f62-d423b29d39ab\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.291842 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhw7r\" (UniqueName: \"kubernetes.io/projected/d7499407-c822-4002-8f62-d423b29d39ab-kube-api-access-hhw7r\") pod \"d7499407-c822-4002-8f62-d423b29d39ab\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.291867 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-nb\") pod \"d7499407-c822-4002-8f62-d423b29d39ab\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.291943 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-db-sync-config-data\") pod \"76a51e5e-b005-4d01-b0a3-86f27d671c32\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.291968 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-config\") pod \"d7499407-c822-4002-8f62-d423b29d39ab\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.291996 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-swift-storage-0\") pod \"d7499407-c822-4002-8f62-d423b29d39ab\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.292053 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76a51e5e-b005-4d01-b0a3-86f27d671c32-etc-machine-id\") pod \"76a51e5e-b005-4d01-b0a3-86f27d671c32\" 
(UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.292109 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-sb\") pod \"d7499407-c822-4002-8f62-d423b29d39ab\" (UID: \"d7499407-c822-4002-8f62-d423b29d39ab\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.292165 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8rqk\" (UniqueName: \"kubernetes.io/projected/76a51e5e-b005-4d01-b0a3-86f27d671c32-kube-api-access-v8rqk\") pod \"76a51e5e-b005-4d01-b0a3-86f27d671c32\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.292198 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-config-data\") pod \"76a51e5e-b005-4d01-b0a3-86f27d671c32\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.292228 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-combined-ca-bundle\") pod \"76a51e5e-b005-4d01-b0a3-86f27d671c32\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.292243 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-scripts\") pod \"76a51e5e-b005-4d01-b0a3-86f27d671c32\" (UID: \"76a51e5e-b005-4d01-b0a3-86f27d671c32\") " Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.296114 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a51e5e-b005-4d01-b0a3-86f27d671c32-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "76a51e5e-b005-4d01-b0a3-86f27d671c32" (UID: "76a51e5e-b005-4d01-b0a3-86f27d671c32"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.299805 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-scripts" (OuterVolumeSpecName: "scripts") pod "76a51e5e-b005-4d01-b0a3-86f27d671c32" (UID: "76a51e5e-b005-4d01-b0a3-86f27d671c32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.299912 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a51e5e-b005-4d01-b0a3-86f27d671c32-kube-api-access-v8rqk" (OuterVolumeSpecName: "kube-api-access-v8rqk") pod "76a51e5e-b005-4d01-b0a3-86f27d671c32" (UID: "76a51e5e-b005-4d01-b0a3-86f27d671c32"). InnerVolumeSpecName "kube-api-access-v8rqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.302652 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "76a51e5e-b005-4d01-b0a3-86f27d671c32" (UID: "76a51e5e-b005-4d01-b0a3-86f27d671c32"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.313688 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7499407-c822-4002-8f62-d423b29d39ab-kube-api-access-hhw7r" (OuterVolumeSpecName: "kube-api-access-hhw7r") pod "d7499407-c822-4002-8f62-d423b29d39ab" (UID: "d7499407-c822-4002-8f62-d423b29d39ab"). InnerVolumeSpecName "kube-api-access-hhw7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.383328 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-config" (OuterVolumeSpecName: "config") pod "d7499407-c822-4002-8f62-d423b29d39ab" (UID: "d7499407-c822-4002-8f62-d423b29d39ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.394919 4889 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.394956 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.394967 4889 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76a51e5e-b005-4d01-b0a3-86f27d671c32-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.394976 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8rqk\" (UniqueName: \"kubernetes.io/projected/76a51e5e-b005-4d01-b0a3-86f27d671c32-kube-api-access-v8rqk\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.394987 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.394995 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhw7r\" (UniqueName: \"kubernetes.io/projected/d7499407-c822-4002-8f62-d423b29d39ab-kube-api-access-hhw7r\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.399910 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7499407-c822-4002-8f62-d423b29d39ab" (UID: "d7499407-c822-4002-8f62-d423b29d39ab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.419778 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7499407-c822-4002-8f62-d423b29d39ab" (UID: "d7499407-c822-4002-8f62-d423b29d39ab"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.421964 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7499407-c822-4002-8f62-d423b29d39ab" (UID: "d7499407-c822-4002-8f62-d423b29d39ab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.424279 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-config-data" (OuterVolumeSpecName: "config-data") pod "76a51e5e-b005-4d01-b0a3-86f27d671c32" (UID: "76a51e5e-b005-4d01-b0a3-86f27d671c32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.424797 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76a51e5e-b005-4d01-b0a3-86f27d671c32" (UID: "76a51e5e-b005-4d01-b0a3-86f27d671c32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.438203 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7499407-c822-4002-8f62-d423b29d39ab" (UID: "d7499407-c822-4002-8f62-d423b29d39ab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.497745 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.497779 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.497794 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.497806 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a51e5e-b005-4d01-b0a3-86f27d671c32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.497819 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.497831 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7499407-c822-4002-8f62-d423b29d39ab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.689398 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.690179 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f645789c-7wjjp" event={"ID":"d7499407-c822-4002-8f62-d423b29d39ab","Type":"ContainerDied","Data":"5d02fb46e0d25524880b239d68bb3740c462ac1c05e583a090cb325035fe49f5"} Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.690262 4889 scope.go:117] "RemoveContainer" containerID="c6f45807a67961737d3da251c09401a7668cf8963ef7251d5407b3781910d3ef" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.724573 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerStarted","Data":"99c5709a5fffdab99f7a8bb562d2116dd62a143f03718002ff7099453c0a4c28"} Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.724774 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.724839 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="proxy-httpd" containerID="cri-o://99c5709a5fffdab99f7a8bb562d2116dd62a143f03718002ff7099453c0a4c28" gracePeriod=30 Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.724937 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="sg-core" containerID="cri-o://32ca4733b23849d00eb4a8f9251b1f248f60fb9aa667e42fe8ec2eb636774cc9" gracePeriod=30 Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.724994 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="ceilometer-notification-agent" containerID="cri-o://9497167b757f1ed2baafd4074c4a1ff36ec970c47ae28055165b5a066f9555c0" gracePeriod=30 Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.726044 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="ceilometer-central-agent" containerID="cri-o://0984b6f70d98fe9863232af5cccbd01580ad25285a24a1a8065176170211d4d0" gracePeriod=30 Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.731598 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q5vz8" event={"ID":"76a51e5e-b005-4d01-b0a3-86f27d671c32","Type":"ContainerDied","Data":"cce8ffe603b9aac8cbf56eb5fa8be362c7a6b6764f5496d532dba1dde720312c"} Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.731649 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce8ffe603b9aac8cbf56eb5fa8be362c7a6b6764f5496d532dba1dde720312c" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.731761 4889 util.go:48] "No ready sandbox for pod can be found. 
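
[Annotation] Two graceful shutdowns are visible in this stretch: dnsmasq-dns-55f645789c-7wjjp was killed at 07:08:31.697049 with gracePeriod=10 and PLEG reported it finished (exitCode=0) at 07:08:32.621244, and the ceilometer-0 containers above were killed with gracePeriod=30. Under the usual Kubernetes semantics (SIGTERM first, SIGKILL once the grace period expires), the dnsmasq container exited well inside its window; a quick check with the two timestamps from the log:

    from datetime import datetime, timedelta

    FMT = '%H:%M:%S.%f'
    kill_start = datetime.strptime('07:08:31.697049', FMT)  # "Killing container with a grace period", gracePeriod=10
    died       = datetime.strptime('07:08:32.621244', FMT)  # PLEG "container finished", exitCode=0
    sigkill_at = kill_start + timedelta(seconds=10)         # escalation point if SIGTERM were ignored

    assert died < sigkill_at
    print(f'dnsmasq-dns exited {(died - kill_start).total_seconds():.3f}s after SIGTERM')
    # -> dnsmasq-dns exited 0.924s after SIGTERM
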
Need to start a new one" pod="openstack/cinder-db-sync-q5vz8" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.735098 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59dcb6998f-sb4k2"] Nov 28 07:08:34 crc kubenswrapper[4889]: W1128 07:08:34.751132 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod741842f5_b565_43c8_bd99_eb15782fcf18.slice/crio-4a49f113408cfaf534228d55a531c628341bd1c8e1ff7b97aaccabb75131ebae WatchSource:0}: Error finding container 4a49f113408cfaf534228d55a531c628341bd1c8e1ff7b97aaccabb75131ebae: Status 404 returned error can't find the container with id 4a49f113408cfaf534228d55a531c628341bd1c8e1ff7b97aaccabb75131ebae Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.776012 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.322800464 podStartE2EDuration="44.775993267s" podCreationTimestamp="2025-11-28 07:07:50 +0000 UTC" firstStartedPulling="2025-11-28 07:07:51.79823962 +0000 UTC m=+1194.768473765" lastFinishedPulling="2025-11-28 07:08:34.251432413 +0000 UTC m=+1237.221666568" observedRunningTime="2025-11-28 07:08:34.755119085 +0000 UTC m=+1237.725353240" watchObservedRunningTime="2025-11-28 07:08:34.775993267 +0000 UTC m=+1237.746227412" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.781492 4889 scope.go:117] "RemoveContainer" containerID="65bfe0fccfe2242969d58b76536a5c635e834e06f989b6ce604dddf0da785469" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.792936 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f645789c-7wjjp"] Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.815886 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f645789c-7wjjp"] Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.836173 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64dfd64c45-jvlj5"] Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.845243 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7cd58688-jtgqv"] Nov 28 07:08:34 crc kubenswrapper[4889]: W1128 07:08:34.848367 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36bf3778_0496_4590_a2fd_18516ee7c057.slice/crio-92fd46b0c31d359e15e5a8a05f388fd56b54efddfb87a750246b847e37491db7 WatchSource:0}: Error finding container 92fd46b0c31d359e15e5a8a05f388fd56b54efddfb87a750246b847e37491db7: Status 404 returned error can't find the container with id 92fd46b0c31d359e15e5a8a05f388fd56b54efddfb87a750246b847e37491db7 Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.881925 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-855dc646d8-klfjs"] Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.933549 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:08:34 crc kubenswrapper[4889]: E1128 07:08:34.934227 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7499407-c822-4002-8f62-d423b29d39ab" containerName="dnsmasq-dns" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.934242 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7499407-c822-4002-8f62-d423b29d39ab" containerName="dnsmasq-dns" Nov 28 07:08:34 crc kubenswrapper[4889]: E1128 07:08:34.934271 4889 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7499407-c822-4002-8f62-d423b29d39ab" containerName="init" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.934276 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7499407-c822-4002-8f62-d423b29d39ab" containerName="init" Nov 28 07:08:34 crc kubenswrapper[4889]: E1128 07:08:34.934291 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a51e5e-b005-4d01-b0a3-86f27d671c32" containerName="cinder-db-sync" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.934297 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a51e5e-b005-4d01-b0a3-86f27d671c32" containerName="cinder-db-sync" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.934558 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a51e5e-b005-4d01-b0a3-86f27d671c32" containerName="cinder-db-sync" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.934573 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7499407-c822-4002-8f62-d423b29d39ab" containerName="dnsmasq-dns" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.935780 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.941242 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9qf8k" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.941941 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.942074 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.943150 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 07:08:34 crc kubenswrapper[4889]: I1128 07:08:34.949819 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.009874 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-scripts\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.009963 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.009982 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.009996 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.010027 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.010046 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8gdf\" (UniqueName: \"kubernetes.io/projected/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-kube-api-access-c8gdf\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.021784 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64dfd64c45-jvlj5"] Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.064468 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8ccb5c7cf-k9n2l"] Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.066313 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.108854 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8ccb5c7cf-k9n2l"] Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.131465 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.131536 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.131566 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.131648 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.131687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8gdf\" (UniqueName: \"kubernetes.io/projected/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-kube-api-access-c8gdf\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.131944 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-scripts\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.132847 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.152594 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-scripts\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.152627 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.154916 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.156846 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.161829 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8gdf\" (UniqueName: \"kubernetes.io/projected/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-kube-api-access-c8gdf\") pod \"cinder-scheduler-0\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.190334 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.215803 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.225171 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.229017 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.234369 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-swift-storage-0\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.234405 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-nb\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.234503 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-config\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.234531 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwz9\" (UniqueName: \"kubernetes.io/projected/ed8cab85-cee8-4604-9898-9215c05dbe9d-kube-api-access-brwz9\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.234548 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-sb\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.234596 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-svc\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.290840 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336032 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004d9422-ac6d-4fdd-bda8-4501b0a01358-logs\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336081 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data-custom\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336129 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7sbg\" (UniqueName: \"kubernetes.io/projected/004d9422-ac6d-4fdd-bda8-4501b0a01358-kube-api-access-m7sbg\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336148 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/004d9422-ac6d-4fdd-bda8-4501b0a01358-etc-machine-id\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336181 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-config\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336209 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brwz9\" (UniqueName: \"kubernetes.io/projected/ed8cab85-cee8-4604-9898-9215c05dbe9d-kube-api-access-brwz9\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336227 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-sb\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336253 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336275 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336305 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-svc\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336322 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-scripts\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336349 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-swift-storage-0\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.336365 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-nb\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.337173 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-nb\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.337258 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-config\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.337262 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-sb\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.337472 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-svc\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.337671 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-swift-storage-0\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.342885 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7499407-c822-4002-8f62-d423b29d39ab" 
path="/var/lib/kubelet/pods/d7499407-c822-4002-8f62-d423b29d39ab/volumes" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.366093 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brwz9\" (UniqueName: \"kubernetes.io/projected/ed8cab85-cee8-4604-9898-9215c05dbe9d-kube-api-access-brwz9\") pod \"dnsmasq-dns-8ccb5c7cf-k9n2l\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.451678 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7sbg\" (UniqueName: \"kubernetes.io/projected/004d9422-ac6d-4fdd-bda8-4501b0a01358-kube-api-access-m7sbg\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.451759 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/004d9422-ac6d-4fdd-bda8-4501b0a01358-etc-machine-id\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.451823 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.452390 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.452436 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-scripts\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.452530 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004d9422-ac6d-4fdd-bda8-4501b0a01358-logs\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.452552 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data-custom\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.454978 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/004d9422-ac6d-4fdd-bda8-4501b0a01358-etc-machine-id\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.455365 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.455472 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004d9422-ac6d-4fdd-bda8-4501b0a01358-logs\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.463422 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-scripts\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.464616 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.467279 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7sbg\" (UniqueName: \"kubernetes.io/projected/004d9422-ac6d-4fdd-bda8-4501b0a01358-kube-api-access-m7sbg\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.468073 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.469379 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data-custom\") pod \"cinder-api-0\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.539730 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.764760 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.776759 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dcb6998f-sb4k2" event={"ID":"741842f5-b565-43c8-bd99-eb15782fcf18","Type":"ContainerStarted","Data":"4a49f113408cfaf534228d55a531c628341bd1c8e1ff7b97aaccabb75131ebae"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.793844 4889 generic.go:334] "Generic (PLEG): container finished" podID="36bf3778-0496-4590-a2fd-18516ee7c057" containerID="214f2642b4dedc121f70b497b93aa298d7a119ae3effa28ad4bee465f87fbac6" exitCode=0 Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.793949 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" event={"ID":"36bf3778-0496-4590-a2fd-18516ee7c057","Type":"ContainerDied","Data":"214f2642b4dedc121f70b497b93aa298d7a119ae3effa28ad4bee465f87fbac6"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.793975 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" event={"ID":"36bf3778-0496-4590-a2fd-18516ee7c057","Type":"ContainerStarted","Data":"92fd46b0c31d359e15e5a8a05f388fd56b54efddfb87a750246b847e37491db7"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.800610 4889 generic.go:334] "Generic (PLEG): container finished" podID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerID="99c5709a5fffdab99f7a8bb562d2116dd62a143f03718002ff7099453c0a4c28" exitCode=0 Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.800634 4889 generic.go:334] "Generic (PLEG): container finished" podID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerID="32ca4733b23849d00eb4a8f9251b1f248f60fb9aa667e42fe8ec2eb636774cc9" exitCode=2 Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.800641 4889 generic.go:334] "Generic (PLEG): container finished" podID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerID="0984b6f70d98fe9863232af5cccbd01580ad25285a24a1a8065176170211d4d0" exitCode=0 Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.800676 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerDied","Data":"99c5709a5fffdab99f7a8bb562d2116dd62a143f03718002ff7099453c0a4c28"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.800699 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerDied","Data":"32ca4733b23849d00eb4a8f9251b1f248f60fb9aa667e42fe8ec2eb636774cc9"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.800725 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerDied","Data":"0984b6f70d98fe9863232af5cccbd01580ad25285a24a1a8065176170211d4d0"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.804592 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7cd58688-jtgqv" event={"ID":"0ad4880a-047e-4ea2-8a17-7c5d2e706adb","Type":"ContainerStarted","Data":"931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.804634 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7cd58688-jtgqv" 
event={"ID":"0ad4880a-047e-4ea2-8a17-7c5d2e706adb","Type":"ContainerStarted","Data":"fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.804646 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7cd58688-jtgqv" event={"ID":"0ad4880a-047e-4ea2-8a17-7c5d2e706adb","Type":"ContainerStarted","Data":"beedab5bc7262e435f5d7c970afbfa24a4d7d117bf473c89b955d2e9703ad218"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.805261 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.808899 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" event={"ID":"d29dfd27-459d-4ade-8119-3c84095d0b1b","Type":"ContainerStarted","Data":"1eb848546f13708da29a1b1e0adc2d1fb9e0b24f303d6da0cc02984c629faa86"} Nov 28 07:08:35 crc kubenswrapper[4889]: I1128 07:08:35.837853 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b7cd58688-jtgqv" podStartSLOduration=2.837836909 podStartE2EDuration="2.837836909s" podCreationTimestamp="2025-11-28 07:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:35.82789239 +0000 UTC m=+1238.798126555" watchObservedRunningTime="2025-11-28 07:08:35.837836909 +0000 UTC m=+1238.808071064" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:35.989312 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8ccb5c7cf-k9n2l"] Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.156149 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.269837 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-nb\") pod \"36bf3778-0496-4590-a2fd-18516ee7c057\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.270265 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-config\") pod \"36bf3778-0496-4590-a2fd-18516ee7c057\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.270366 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-swift-storage-0\") pod \"36bf3778-0496-4590-a2fd-18516ee7c057\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.270416 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-svc\") pod \"36bf3778-0496-4590-a2fd-18516ee7c057\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.270471 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-sb\") pod \"36bf3778-0496-4590-a2fd-18516ee7c057\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.270535 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf2xl\" (UniqueName: \"kubernetes.io/projected/36bf3778-0496-4590-a2fd-18516ee7c057-kube-api-access-gf2xl\") pod \"36bf3778-0496-4590-a2fd-18516ee7c057\" (UID: \"36bf3778-0496-4590-a2fd-18516ee7c057\") " Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.279213 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bf3778-0496-4590-a2fd-18516ee7c057-kube-api-access-gf2xl" (OuterVolumeSpecName: "kube-api-access-gf2xl") pod "36bf3778-0496-4590-a2fd-18516ee7c057" (UID: "36bf3778-0496-4590-a2fd-18516ee7c057"). InnerVolumeSpecName "kube-api-access-gf2xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.316053 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-config" (OuterVolumeSpecName: "config") pod "36bf3778-0496-4590-a2fd-18516ee7c057" (UID: "36bf3778-0496-4590-a2fd-18516ee7c057"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.319445 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36bf3778-0496-4590-a2fd-18516ee7c057" (UID: "36bf3778-0496-4590-a2fd-18516ee7c057"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.320942 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36bf3778-0496-4590-a2fd-18516ee7c057" (UID: "36bf3778-0496-4590-a2fd-18516ee7c057"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.323606 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36bf3778-0496-4590-a2fd-18516ee7c057" (UID: "36bf3778-0496-4590-a2fd-18516ee7c057"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.325039 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "36bf3778-0496-4590-a2fd-18516ee7c057" (UID: "36bf3778-0496-4590-a2fd-18516ee7c057"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.364280 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.373241 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.373270 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.373282 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf2xl\" (UniqueName: \"kubernetes.io/projected/36bf3778-0496-4590-a2fd-18516ee7c057-kube-api-access-gf2xl\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.373291 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.373300 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.373309 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bf3778-0496-4590-a2fd-18516ee7c057-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.823005 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" event={"ID":"36bf3778-0496-4590-a2fd-18516ee7c057","Type":"ContainerDied","Data":"92fd46b0c31d359e15e5a8a05f388fd56b54efddfb87a750246b847e37491db7"} Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.823015 4889 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-64dfd64c45-jvlj5" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.823077 4889 scope.go:117] "RemoveContainer" containerID="214f2642b4dedc121f70b497b93aa298d7a119ae3effa28ad4bee465f87fbac6" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.828334 4889 generic.go:334] "Generic (PLEG): container finished" podID="ed8cab85-cee8-4604-9898-9215c05dbe9d" containerID="f324159977992549c1956657bb766a213267774090df227c5602d57d1df7efca" exitCode=0 Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.828426 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" event={"ID":"ed8cab85-cee8-4604-9898-9215c05dbe9d","Type":"ContainerDied","Data":"f324159977992549c1956657bb766a213267774090df227c5602d57d1df7efca"} Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.828461 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" event={"ID":"ed8cab85-cee8-4604-9898-9215c05dbe9d","Type":"ContainerStarted","Data":"c9a9e8c7fc53625804f41ebf30a00a1f5643bb7fed306837c240397e330dfce8"} Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.831489 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"688094e3-8cd7-49a1-944b-a3ec9f8a22ad","Type":"ContainerStarted","Data":"2ea1aa911922b75e42b63a1f1481d64f3dd3d88cb0fb149567a81390e2772c0d"} Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.831662 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.914621 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64dfd64c45-jvlj5"] Nov 28 07:08:36 crc kubenswrapper[4889]: I1128 07:08:36.923020 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64dfd64c45-jvlj5"] Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.356922 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bf3778-0496-4590-a2fd-18516ee7c057" path="/var/lib/kubelet/pods/36bf3778-0496-4590-a2fd-18516ee7c057/volumes" Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.844484 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"004d9422-ac6d-4fdd-bda8-4501b0a01358","Type":"ContainerStarted","Data":"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de"} Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.845347 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"004d9422-ac6d-4fdd-bda8-4501b0a01358","Type":"ContainerStarted","Data":"e9d25f441f679294e19f6362a3f8f69229afb04af1950754b369c3f0f344c9f6"} Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.847180 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" event={"ID":"ed8cab85-cee8-4604-9898-9215c05dbe9d","Type":"ContainerStarted","Data":"fcb4b482e76a139224c75dd763c6b8737558702df24d0451a0c3c91427c4ae02"} Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.848327 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.851441 4889 generic.go:334] "Generic (PLEG): container finished" podID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerID="9497167b757f1ed2baafd4074c4a1ff36ec970c47ae28055165b5a066f9555c0" exitCode=0 Nov 28 
07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.851499 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerDied","Data":"9497167b757f1ed2baafd4074c4a1ff36ec970c47ae28055165b5a066f9555c0"} Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.860333 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" event={"ID":"d29dfd27-459d-4ade-8119-3c84095d0b1b","Type":"ContainerStarted","Data":"bc963ae674cb642cf73feedb96f166caf22e14565033105ea01efe00c81d6de0"} Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.860370 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" event={"ID":"d29dfd27-459d-4ade-8119-3c84095d0b1b","Type":"ContainerStarted","Data":"88a22234953fdf7b7113f5a16ffb14c7f8e9a5558572a79a29816025d55b2843"} Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.864081 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dcb6998f-sb4k2" event={"ID":"741842f5-b565-43c8-bd99-eb15782fcf18","Type":"ContainerStarted","Data":"ebd8b75f47303d72ac1c1453cd80c63707ba3cde640979a9031b535215433325"} Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.864159 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dcb6998f-sb4k2" event={"ID":"741842f5-b565-43c8-bd99-eb15782fcf18","Type":"ContainerStarted","Data":"8961c6c6cb72aa100a7094f71ba9f1994c37f8a3f3c96b49d31139ba2ab2efea"} Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.908213 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" podStartSLOduration=2.908192862 podStartE2EDuration="2.908192862s" podCreationTimestamp="2025-11-28 07:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:37.877697018 +0000 UTC m=+1240.847931193" watchObservedRunningTime="2025-11-28 07:08:37.908192862 +0000 UTC m=+1240.878427017" Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.975955 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-59dcb6998f-sb4k2" podStartSLOduration=2.699957942 podStartE2EDuration="4.975932772s" podCreationTimestamp="2025-11-28 07:08:33 +0000 UTC" firstStartedPulling="2025-11-28 07:08:34.780954316 +0000 UTC m=+1237.751188471" lastFinishedPulling="2025-11-28 07:08:37.056929146 +0000 UTC m=+1240.027163301" observedRunningTime="2025-11-28 07:08:37.916970443 +0000 UTC m=+1240.887204608" watchObservedRunningTime="2025-11-28 07:08:37.975932772 +0000 UTC m=+1240.946166927" Nov 28 07:08:37 crc kubenswrapper[4889]: I1128 07:08:37.981326 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" podStartSLOduration=2.769005893 podStartE2EDuration="4.981308301s" podCreationTimestamp="2025-11-28 07:08:33 +0000 UTC" firstStartedPulling="2025-11-28 07:08:34.843210504 +0000 UTC m=+1237.813444659" lastFinishedPulling="2025-11-28 07:08:37.055512912 +0000 UTC m=+1240.025747067" observedRunningTime="2025-11-28 07:08:37.944165967 +0000 UTC m=+1240.914400122" watchObservedRunningTime="2025-11-28 07:08:37.981308301 +0000 UTC m=+1240.951542456" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.032934 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.119115 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.125331 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-scripts\") pod \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.125375 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-log-httpd\") pod \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.125452 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-sg-core-conf-yaml\") pod \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.125472 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-run-httpd\") pod \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.125499 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-config-data\") pod \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.125562 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-combined-ca-bundle\") pod \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.125644 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7v2h\" (UniqueName: \"kubernetes.io/projected/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-kube-api-access-n7v2h\") pod \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\" (UID: \"4a9a3f68-4a72-4ae8-a285-c39b7438bef0\") " Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.126514 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4a9a3f68-4a72-4ae8-a285-c39b7438bef0" (UID: "4a9a3f68-4a72-4ae8-a285-c39b7438bef0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.126939 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4a9a3f68-4a72-4ae8-a285-c39b7438bef0" (UID: "4a9a3f68-4a72-4ae8-a285-c39b7438bef0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.129876 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-kube-api-access-n7v2h" (OuterVolumeSpecName: "kube-api-access-n7v2h") pod "4a9a3f68-4a72-4ae8-a285-c39b7438bef0" (UID: "4a9a3f68-4a72-4ae8-a285-c39b7438bef0"). InnerVolumeSpecName "kube-api-access-n7v2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.132812 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-scripts" (OuterVolumeSpecName: "scripts") pod "4a9a3f68-4a72-4ae8-a285-c39b7438bef0" (UID: "4a9a3f68-4a72-4ae8-a285-c39b7438bef0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.151124 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4a9a3f68-4a72-4ae8-a285-c39b7438bef0" (UID: "4a9a3f68-4a72-4ae8-a285-c39b7438bef0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.223728 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a9a3f68-4a72-4ae8-a285-c39b7438bef0" (UID: "4a9a3f68-4a72-4ae8-a285-c39b7438bef0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.227826 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7v2h\" (UniqueName: \"kubernetes.io/projected/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-kube-api-access-n7v2h\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.227862 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.227872 4889 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.227881 4889 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.227889 4889 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.227898 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.261021 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-config-data" (OuterVolumeSpecName: "config-data") pod "4a9a3f68-4a72-4ae8-a285-c39b7438bef0" (UID: "4a9a3f68-4a72-4ae8-a285-c39b7438bef0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.329497 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9a3f68-4a72-4ae8-a285-c39b7438bef0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.877776 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"688094e3-8cd7-49a1-944b-a3ec9f8a22ad","Type":"ContainerStarted","Data":"e048c0c3be6649ff06912bacb6f6161b8b52e75d88222958570a028361efa949"} Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.883948 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9a3f68-4a72-4ae8-a285-c39b7438bef0","Type":"ContainerDied","Data":"4f9e2df9cb3a549cfc1d0193450f32bccecf57514347f90799c39656ec4bbb8b"} Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.883991 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.884011 4889 scope.go:117] "RemoveContainer" containerID="99c5709a5fffdab99f7a8bb562d2116dd62a143f03718002ff7099453c0a4c28" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.925212 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.933919 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.951788 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:08:38 crc kubenswrapper[4889]: E1128 07:08:38.952273 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="ceilometer-notification-agent" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952293 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="ceilometer-notification-agent" Nov 28 07:08:38 crc kubenswrapper[4889]: E1128 07:08:38.952315 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="sg-core" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952325 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="sg-core" Nov 28 07:08:38 crc kubenswrapper[4889]: E1128 07:08:38.952344 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bf3778-0496-4590-a2fd-18516ee7c057" containerName="init" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952352 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bf3778-0496-4590-a2fd-18516ee7c057" containerName="init" Nov 28 07:08:38 crc kubenswrapper[4889]: E1128 07:08:38.952372 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="proxy-httpd" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952379 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="proxy-httpd" Nov 28 07:08:38 crc 
kubenswrapper[4889]: E1128 07:08:38.952403 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="ceilometer-central-agent" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952411 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="ceilometer-central-agent" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952619 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="sg-core" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952649 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bf3778-0496-4590-a2fd-18516ee7c057" containerName="init" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952667 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="ceilometer-central-agent" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952680 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="proxy-httpd" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.952694 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" containerName="ceilometer-notification-agent" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.961409 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.967255 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.988026 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:08:38 crc kubenswrapper[4889]: I1128 07:08:38.988256 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.009781 4889 scope.go:117] "RemoveContainer" containerID="32ca4733b23849d00eb4a8f9251b1f248f60fb9aa667e42fe8ec2eb636774cc9" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.031836 4889 scope.go:117] "RemoveContainer" containerID="9497167b757f1ed2baafd4074c4a1ff36ec970c47ae28055165b5a066f9555c0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.044449 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfbf\" (UniqueName: \"kubernetes.io/projected/05b0ced3-9aef-4c0e-bae5-44573e094a49-kube-api-access-fnfbf\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.044681 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-scripts\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.044812 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-run-httpd\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc 
kubenswrapper[4889]: I1128 07:08:39.044916 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-config-data\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.045058 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.045226 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.045324 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-log-httpd\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.047682 4889 scope.go:117] "RemoveContainer" containerID="0984b6f70d98fe9863232af5cccbd01580ad25285a24a1a8065176170211d4d0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.147382 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.147692 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-log-httpd\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.147836 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnfbf\" (UniqueName: \"kubernetes.io/projected/05b0ced3-9aef-4c0e-bae5-44573e094a49-kube-api-access-fnfbf\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.147994 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-scripts\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.148126 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-run-httpd\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.148235 4889 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-config-data\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.148368 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.148966 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-log-httpd\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.149232 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-run-httpd\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.165461 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.166804 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.167382 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-config-data\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.175347 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-scripts\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.178208 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnfbf\" (UniqueName: \"kubernetes.io/projected/05b0ced3-9aef-4c0e-bae5-44573e094a49-kube-api-access-fnfbf\") pod \"ceilometer-0\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.291562 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7fd84fdbd8-ztpds"] Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.293229 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.297191 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.297309 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.303099 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fd84fdbd8-ztpds"] Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.304918 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.348396 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9a3f68-4a72-4ae8-a285-c39b7438bef0" path="/var/lib/kubelet/pods/4a9a3f68-4a72-4ae8-a285-c39b7438bef0/volumes" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.370784 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-combined-ca-bundle\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.371025 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-internal-tls-certs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.371070 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.371171 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41bad87-7181-45c9-ad09-bf49b278416d-logs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.371267 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-public-tls-certs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.371318 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwpsz\" (UniqueName: \"kubernetes.io/projected/c41bad87-7181-45c9-ad09-bf49b278416d-kube-api-access-zwpsz\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.371381 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data-custom\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.473187 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-internal-tls-certs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.473507 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.473569 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41bad87-7181-45c9-ad09-bf49b278416d-logs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.473608 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-public-tls-certs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.473632 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwpsz\" (UniqueName: \"kubernetes.io/projected/c41bad87-7181-45c9-ad09-bf49b278416d-kube-api-access-zwpsz\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.473660 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data-custom\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.473722 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-combined-ca-bundle\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.480200 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41bad87-7181-45c9-ad09-bf49b278416d-logs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.484338 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-public-tls-certs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.484885 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-combined-ca-bundle\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.486314 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-internal-tls-certs\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.487886 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.488387 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data-custom\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.500290 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwpsz\" (UniqueName: \"kubernetes.io/projected/c41bad87-7181-45c9-ad09-bf49b278416d-kube-api-access-zwpsz\") pod \"barbican-api-7fd84fdbd8-ztpds\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") " pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.625262 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:39 crc kubenswrapper[4889]: W1128 07:08:39.643975 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05b0ced3_9aef_4c0e_bae5_44573e094a49.slice/crio-9b2c5d2e815f2647d5e3dc5e79773d3fa23f2bd6792835ab17283585c2d8209d WatchSource:0}: Error finding container 9b2c5d2e815f2647d5e3dc5e79773d3fa23f2bd6792835ab17283585c2d8209d: Status 404 returned error can't find the container with id 9b2c5d2e815f2647d5e3dc5e79773d3fa23f2bd6792835ab17283585c2d8209d Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.667301 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:08:39 crc kubenswrapper[4889]: I1128 07:08:39.892731 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerStarted","Data":"9b2c5d2e815f2647d5e3dc5e79773d3fa23f2bd6792835ab17283585c2d8209d"} Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.071164 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fd84fdbd8-ztpds"] Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.914904 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"004d9422-ac6d-4fdd-bda8-4501b0a01358","Type":"ContainerStarted","Data":"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb"} Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.915410 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.915119 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerName="cinder-api-log" containerID="cri-o://05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de" gracePeriod=30 Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.915415 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerName="cinder-api" containerID="cri-o://8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb" gracePeriod=30 Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.916327 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fd84fdbd8-ztpds" event={"ID":"c41bad87-7181-45c9-ad09-bf49b278416d","Type":"ContainerStarted","Data":"411c51ac4022ce773c6ca107021fdf0aa7e87825c86f41edfb9eef55abeb15ae"} Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.916359 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fd84fdbd8-ztpds" event={"ID":"c41bad87-7181-45c9-ad09-bf49b278416d","Type":"ContainerStarted","Data":"9fdc68e6e823526c2abd5a125cf23988589aa37b2d2343288d601ff0dae6381f"} Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.919502 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"688094e3-8cd7-49a1-944b-a3ec9f8a22ad","Type":"ContainerStarted","Data":"d2c6c7cb50eadb7c6c7c6457a701ce31b8358d43a28f6b4fcba8c8b4747384e3"} Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.942199 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.942167483 podStartE2EDuration="5.942167483s" 
podCreationTimestamp="2025-11-28 07:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:40.93789845 +0000 UTC m=+1243.908132615" watchObservedRunningTime="2025-11-28 07:08:40.942167483 +0000 UTC m=+1243.912401628" Nov 28 07:08:40 crc kubenswrapper[4889]: I1128 07:08:40.959169 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.483134812 podStartE2EDuration="6.959151742s" podCreationTimestamp="2025-11-28 07:08:34 +0000 UTC" firstStartedPulling="2025-11-28 07:08:35.77472299 +0000 UTC m=+1238.744957145" lastFinishedPulling="2025-11-28 07:08:37.25073992 +0000 UTC m=+1240.220974075" observedRunningTime="2025-11-28 07:08:40.958493766 +0000 UTC m=+1243.928727921" watchObservedRunningTime="2025-11-28 07:08:40.959151742 +0000 UTC m=+1243.929385897" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.634915 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.716481 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data-custom\") pod \"004d9422-ac6d-4fdd-bda8-4501b0a01358\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.716544 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-combined-ca-bundle\") pod \"004d9422-ac6d-4fdd-bda8-4501b0a01358\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.716583 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004d9422-ac6d-4fdd-bda8-4501b0a01358-logs\") pod \"004d9422-ac6d-4fdd-bda8-4501b0a01358\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.716626 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7sbg\" (UniqueName: \"kubernetes.io/projected/004d9422-ac6d-4fdd-bda8-4501b0a01358-kube-api-access-m7sbg\") pod \"004d9422-ac6d-4fdd-bda8-4501b0a01358\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.716694 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/004d9422-ac6d-4fdd-bda8-4501b0a01358-etc-machine-id\") pod \"004d9422-ac6d-4fdd-bda8-4501b0a01358\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.717353 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004d9422-ac6d-4fdd-bda8-4501b0a01358-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "004d9422-ac6d-4fdd-bda8-4501b0a01358" (UID: "004d9422-ac6d-4fdd-bda8-4501b0a01358"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.717535 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/004d9422-ac6d-4fdd-bda8-4501b0a01358-logs" (OuterVolumeSpecName: "logs") pod "004d9422-ac6d-4fdd-bda8-4501b0a01358" (UID: "004d9422-ac6d-4fdd-bda8-4501b0a01358"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.718261 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data\") pod \"004d9422-ac6d-4fdd-bda8-4501b0a01358\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.718538 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-scripts\") pod \"004d9422-ac6d-4fdd-bda8-4501b0a01358\" (UID: \"004d9422-ac6d-4fdd-bda8-4501b0a01358\") " Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.719421 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004d9422-ac6d-4fdd-bda8-4501b0a01358-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.719439 4889 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/004d9422-ac6d-4fdd-bda8-4501b0a01358-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.724035 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004d9422-ac6d-4fdd-bda8-4501b0a01358-kube-api-access-m7sbg" (OuterVolumeSpecName: "kube-api-access-m7sbg") pod "004d9422-ac6d-4fdd-bda8-4501b0a01358" (UID: "004d9422-ac6d-4fdd-bda8-4501b0a01358"). InnerVolumeSpecName "kube-api-access-m7sbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.735897 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "004d9422-ac6d-4fdd-bda8-4501b0a01358" (UID: "004d9422-ac6d-4fdd-bda8-4501b0a01358"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.739426 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-scripts" (OuterVolumeSpecName: "scripts") pod "004d9422-ac6d-4fdd-bda8-4501b0a01358" (UID: "004d9422-ac6d-4fdd-bda8-4501b0a01358"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.748166 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "004d9422-ac6d-4fdd-bda8-4501b0a01358" (UID: "004d9422-ac6d-4fdd-bda8-4501b0a01358"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.802340 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data" (OuterVolumeSpecName: "config-data") pod "004d9422-ac6d-4fdd-bda8-4501b0a01358" (UID: "004d9422-ac6d-4fdd-bda8-4501b0a01358"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.820961 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.821280 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.821291 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7sbg\" (UniqueName: \"kubernetes.io/projected/004d9422-ac6d-4fdd-bda8-4501b0a01358-kube-api-access-m7sbg\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.821301 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.821310 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004d9422-ac6d-4fdd-bda8-4501b0a01358-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.929770 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fd84fdbd8-ztpds" event={"ID":"c41bad87-7181-45c9-ad09-bf49b278416d","Type":"ContainerStarted","Data":"e680db750829bfe235068d372b958d1768e839b09f9e0ae52648fe5055964fda"} Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.930281 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.930324 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.931837 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerStarted","Data":"ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc"} Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.933599 4889 generic.go:334] "Generic (PLEG): container finished" podID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerID="8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb" exitCode=0 Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.933620 4889 generic.go:334] "Generic (PLEG): container finished" podID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerID="05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de" exitCode=143 Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.934113 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.935371 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"004d9422-ac6d-4fdd-bda8-4501b0a01358","Type":"ContainerDied","Data":"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb"} Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.935417 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"004d9422-ac6d-4fdd-bda8-4501b0a01358","Type":"ContainerDied","Data":"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de"} Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.935430 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"004d9422-ac6d-4fdd-bda8-4501b0a01358","Type":"ContainerDied","Data":"e9d25f441f679294e19f6362a3f8f69229afb04af1950754b369c3f0f344c9f6"} Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.935446 4889 scope.go:117] "RemoveContainer" containerID="8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.950992 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7fd84fdbd8-ztpds" podStartSLOduration=2.95097259 podStartE2EDuration="2.95097259s" podCreationTimestamp="2025-11-28 07:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:41.948815918 +0000 UTC m=+1244.919050073" watchObservedRunningTime="2025-11-28 07:08:41.95097259 +0000 UTC m=+1244.921206745" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.955414 4889 scope.go:117] "RemoveContainer" containerID="05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.990141 4889 scope.go:117] "RemoveContainer" containerID="8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb" Nov 28 07:08:41 crc kubenswrapper[4889]: E1128 07:08:41.993193 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb\": container with ID starting with 8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb not found: ID does not exist" containerID="8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.993241 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb"} err="failed to get container status \"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb\": rpc error: code = NotFound desc = could not find container \"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb\": container with ID starting with 8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb not found: ID does not exist" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.993296 4889 scope.go:117] "RemoveContainer" containerID="05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de" Nov 28 07:08:41 crc kubenswrapper[4889]: E1128 07:08:41.993726 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de\": container 
with ID starting with 05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de not found: ID does not exist" containerID="05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.993746 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de"} err="failed to get container status \"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de\": rpc error: code = NotFound desc = could not find container \"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de\": container with ID starting with 05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de not found: ID does not exist" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.993761 4889 scope.go:117] "RemoveContainer" containerID="8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.994161 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb"} err="failed to get container status \"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb\": rpc error: code = NotFound desc = could not find container \"8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb\": container with ID starting with 8967b31d2d9512a7320f958d7f38353d668f24e65d2b26a7b15a6eae70117efb not found: ID does not exist" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.994213 4889 scope.go:117] "RemoveContainer" containerID="05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de" Nov 28 07:08:41 crc kubenswrapper[4889]: I1128 07:08:41.994565 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de"} err="failed to get container status \"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de\": rpc error: code = NotFound desc = could not find container \"05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de\": container with ID starting with 05bde37eee4ea4f2389980ac1b60fb381850db6b1434dbe9c2e99135ddac69de not found: ID does not exist" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.013267 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.025799 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.032550 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:42 crc kubenswrapper[4889]: E1128 07:08:42.033065 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerName="cinder-api" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.033083 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerName="cinder-api" Nov 28 07:08:42 crc kubenswrapper[4889]: E1128 07:08:42.033101 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerName="cinder-api-log" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.033109 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" 
containerName="cinder-api-log" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.033412 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerName="cinder-api" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.033437 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" containerName="cinder-api-log" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.034440 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.035916 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.037421 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.037424 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.040813 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.127790 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.127867 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.127941 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.128120 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7209dbe-be81-47dd-9255-c2444debdaa9-logs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.128286 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgqnp\" (UniqueName: \"kubernetes.io/projected/c7209dbe-be81-47dd-9255-c2444debdaa9-kube-api-access-rgqnp\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.128386 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-scripts\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.128438 4889 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.128493 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.128530 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7209dbe-be81-47dd-9255-c2444debdaa9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.230333 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.230425 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.230513 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.230578 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7209dbe-be81-47dd-9255-c2444debdaa9-logs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.230660 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgqnp\" (UniqueName: \"kubernetes.io/projected/c7209dbe-be81-47dd-9255-c2444debdaa9-kube-api-access-rgqnp\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.231043 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7209dbe-be81-47dd-9255-c2444debdaa9-logs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.231624 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-scripts\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc 
kubenswrapper[4889]: I1128 07:08:42.232376 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.232483 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.232534 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7209dbe-be81-47dd-9255-c2444debdaa9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.232649 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7209dbe-be81-47dd-9255-c2444debdaa9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.236863 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.237268 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-scripts\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.237540 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.239446 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.243224 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.245842 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.246989 
4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgqnp\" (UniqueName: \"kubernetes.io/projected/c7209dbe-be81-47dd-9255-c2444debdaa9-kube-api-access-rgqnp\") pod \"cinder-api-0\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.348308 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.802761 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:08:42 crc kubenswrapper[4889]: W1128 07:08:42.805568 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7209dbe_be81_47dd_9255_c2444debdaa9.slice/crio-656156a8a62738bcf9a70d7751c74ef4524ee3b586a58332890da05940888514 WatchSource:0}: Error finding container 656156a8a62738bcf9a70d7751c74ef4524ee3b586a58332890da05940888514: Status 404 returned error can't find the container with id 656156a8a62738bcf9a70d7751c74ef4524ee3b586a58332890da05940888514 Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.941672 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7209dbe-be81-47dd-9255-c2444debdaa9","Type":"ContainerStarted","Data":"656156a8a62738bcf9a70d7751c74ef4524ee3b586a58332890da05940888514"} Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.943511 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerStarted","Data":"b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c"} Nov 28 07:08:42 crc kubenswrapper[4889]: I1128 07:08:42.943535 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerStarted","Data":"85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51"} Nov 28 07:08:43 crc kubenswrapper[4889]: I1128 07:08:43.379076 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004d9422-ac6d-4fdd-bda8-4501b0a01358" path="/var/lib/kubelet/pods/004d9422-ac6d-4fdd-bda8-4501b0a01358/volumes" Nov 28 07:08:43 crc kubenswrapper[4889]: I1128 07:08:43.958655 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7209dbe-be81-47dd-9255-c2444debdaa9","Type":"ContainerStarted","Data":"0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04"} Nov 28 07:08:43 crc kubenswrapper[4889]: I1128 07:08:43.959007 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7209dbe-be81-47dd-9255-c2444debdaa9","Type":"ContainerStarted","Data":"5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d"} Nov 28 07:08:43 crc kubenswrapper[4889]: I1128 07:08:43.959798 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 07:08:43 crc kubenswrapper[4889]: I1128 07:08:43.989763 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.989743131 podStartE2EDuration="2.989743131s" podCreationTimestamp="2025-11-28 07:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:43.984910425 +0000 UTC m=+1246.955144580" 
watchObservedRunningTime="2025-11-28 07:08:43.989743131 +0000 UTC m=+1246.959977306" Nov 28 07:08:44 crc kubenswrapper[4889]: I1128 07:08:44.969776 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerStarted","Data":"983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e"} Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.008399 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.482920512 podStartE2EDuration="7.008376285s" podCreationTimestamp="2025-11-28 07:08:38 +0000 UTC" firstStartedPulling="2025-11-28 07:08:39.656830252 +0000 UTC m=+1242.627064407" lastFinishedPulling="2025-11-28 07:08:44.182286025 +0000 UTC m=+1247.152520180" observedRunningTime="2025-11-28 07:08:44.998335673 +0000 UTC m=+1247.968569838" watchObservedRunningTime="2025-11-28 07:08:45.008376285 +0000 UTC m=+1247.978610440" Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.292209 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.457265 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.562406 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f677dd449-zh7j5"] Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.562672 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" podUID="237dc8d4-ff16-46ab-a728-212825640012" containerName="dnsmasq-dns" containerID="cri-o://ac2380e4ed7c696e5a1f6ff73ec36b3f5e62b95b602f8e7c48f289056db38027" gracePeriod=10 Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.638041 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.648540 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.781274 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7cd58688-jtgqv" Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.983506 4889 generic.go:334] "Generic (PLEG): container finished" podID="237dc8d4-ff16-46ab-a728-212825640012" containerID="ac2380e4ed7c696e5a1f6ff73ec36b3f5e62b95b602f8e7c48f289056db38027" exitCode=0 Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.983693 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" event={"ID":"237dc8d4-ff16-46ab-a728-212825640012","Type":"ContainerDied","Data":"ac2380e4ed7c696e5a1f6ff73ec36b3f5e62b95b602f8e7c48f289056db38027"} Nov 28 07:08:45 crc kubenswrapper[4889]: I1128 07:08:45.983808 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.047564 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.217308 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.356381 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-sb\") pod \"237dc8d4-ff16-46ab-a728-212825640012\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.356499 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfvd5\" (UniqueName: \"kubernetes.io/projected/237dc8d4-ff16-46ab-a728-212825640012-kube-api-access-cfvd5\") pod \"237dc8d4-ff16-46ab-a728-212825640012\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.356590 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-nb\") pod \"237dc8d4-ff16-46ab-a728-212825640012\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.356614 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-swift-storage-0\") pod \"237dc8d4-ff16-46ab-a728-212825640012\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.356801 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-config\") pod \"237dc8d4-ff16-46ab-a728-212825640012\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.356925 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-svc\") pod \"237dc8d4-ff16-46ab-a728-212825640012\" (UID: \"237dc8d4-ff16-46ab-a728-212825640012\") " Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.363088 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237dc8d4-ff16-46ab-a728-212825640012-kube-api-access-cfvd5" (OuterVolumeSpecName: "kube-api-access-cfvd5") pod "237dc8d4-ff16-46ab-a728-212825640012" (UID: "237dc8d4-ff16-46ab-a728-212825640012"). InnerVolumeSpecName "kube-api-access-cfvd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.428478 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "237dc8d4-ff16-46ab-a728-212825640012" (UID: "237dc8d4-ff16-46ab-a728-212825640012"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.433344 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-config" (OuterVolumeSpecName: "config") pod "237dc8d4-ff16-46ab-a728-212825640012" (UID: "237dc8d4-ff16-46ab-a728-212825640012"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.436561 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "237dc8d4-ff16-46ab-a728-212825640012" (UID: "237dc8d4-ff16-46ab-a728-212825640012"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.439372 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "237dc8d4-ff16-46ab-a728-212825640012" (UID: "237dc8d4-ff16-46ab-a728-212825640012"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.459601 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.459629 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.459639 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.459648 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.459657 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfvd5\" (UniqueName: \"kubernetes.io/projected/237dc8d4-ff16-46ab-a728-212825640012-kube-api-access-cfvd5\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.477957 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "237dc8d4-ff16-46ab-a728-212825640012" (UID: "237dc8d4-ff16-46ab-a728-212825640012"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.561115 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/237dc8d4-ff16-46ab-a728-212825640012-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.995281 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerName="cinder-scheduler" containerID="cri-o://e048c0c3be6649ff06912bacb6f6161b8b52e75d88222958570a028361efa949" gracePeriod=30 Nov 28 07:08:46 crc kubenswrapper[4889]: I1128 07:08:46.995574 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" Nov 28 07:08:47 crc kubenswrapper[4889]: I1128 07:08:47.000899 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f677dd449-zh7j5" event={"ID":"237dc8d4-ff16-46ab-a728-212825640012","Type":"ContainerDied","Data":"d58bd89b3e96f8536d8844e6d3c8329a52284a1d0a0e5cbd354b8e1e2a019628"} Nov 28 07:08:47 crc kubenswrapper[4889]: I1128 07:08:47.000953 4889 scope.go:117] "RemoveContainer" containerID="ac2380e4ed7c696e5a1f6ff73ec36b3f5e62b95b602f8e7c48f289056db38027" Nov 28 07:08:47 crc kubenswrapper[4889]: I1128 07:08:47.001190 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerName="probe" containerID="cri-o://d2c6c7cb50eadb7c6c7c6457a701ce31b8358d43a28f6b4fcba8c8b4747384e3" gracePeriod=30 Nov 28 07:08:47 crc kubenswrapper[4889]: I1128 07:08:47.028511 4889 scope.go:117] "RemoveContainer" containerID="17567b5a3d3d57af093814d7190ae83ca8c790a8d08e6ba123758607964656f4" Nov 28 07:08:47 crc kubenswrapper[4889]: I1128 07:08:47.039928 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f677dd449-zh7j5"] Nov 28 07:08:47 crc kubenswrapper[4889]: I1128 07:08:47.053949 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f677dd449-zh7j5"] Nov 28 07:08:47 crc kubenswrapper[4889]: I1128 07:08:47.358697 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237dc8d4-ff16-46ab-a728-212825640012" path="/var/lib/kubelet/pods/237dc8d4-ff16-46ab-a728-212825640012/volumes" Nov 28 07:08:48 crc kubenswrapper[4889]: I1128 07:08:48.005507 4889 generic.go:334] "Generic (PLEG): container finished" podID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerID="d2c6c7cb50eadb7c6c7c6457a701ce31b8358d43a28f6b4fcba8c8b4747384e3" exitCode=0 Nov 28 07:08:48 crc kubenswrapper[4889]: I1128 07:08:48.005570 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"688094e3-8cd7-49a1-944b-a3ec9f8a22ad","Type":"ContainerDied","Data":"d2c6c7cb50eadb7c6c7c6457a701ce31b8358d43a28f6b4fcba8c8b4747384e3"} Nov 28 07:08:49 crc kubenswrapper[4889]: I1128 07:08:49.126534 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:49 crc kubenswrapper[4889]: I1128 07:08:49.181267 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.045124 4889 generic.go:334] "Generic (PLEG): container finished" podID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerID="e048c0c3be6649ff06912bacb6f6161b8b52e75d88222958570a028361efa949" exitCode=0 Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.045981 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"688094e3-8cd7-49a1-944b-a3ec9f8a22ad","Type":"ContainerDied","Data":"e048c0c3be6649ff06912bacb6f6161b8b52e75d88222958570a028361efa949"} Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.166797 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.225780 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-etc-machine-id\") pod \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.225896 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "688094e3-8cd7-49a1-944b-a3ec9f8a22ad" (UID: "688094e3-8cd7-49a1-944b-a3ec9f8a22ad"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.225979 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data-custom\") pod \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.226061 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-combined-ca-bundle\") pod \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.226082 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-scripts\") pod \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.226159 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8gdf\" (UniqueName: \"kubernetes.io/projected/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-kube-api-access-c8gdf\") pod \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.226291 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data\") pod \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\" (UID: \"688094e3-8cd7-49a1-944b-a3ec9f8a22ad\") " Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.226900 4889 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.234107 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-scripts" (OuterVolumeSpecName: "scripts") pod "688094e3-8cd7-49a1-944b-a3ec9f8a22ad" (UID: "688094e3-8cd7-49a1-944b-a3ec9f8a22ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.235914 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "688094e3-8cd7-49a1-944b-a3ec9f8a22ad" (UID: "688094e3-8cd7-49a1-944b-a3ec9f8a22ad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.235975 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-kube-api-access-c8gdf" (OuterVolumeSpecName: "kube-api-access-c8gdf") pod "688094e3-8cd7-49a1-944b-a3ec9f8a22ad" (UID: "688094e3-8cd7-49a1-944b-a3ec9f8a22ad"). InnerVolumeSpecName "kube-api-access-c8gdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.281645 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "688094e3-8cd7-49a1-944b-a3ec9f8a22ad" (UID: "688094e3-8cd7-49a1-944b-a3ec9f8a22ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.328573 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.328607 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.328616 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8gdf\" (UniqueName: \"kubernetes.io/projected/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-kube-api-access-c8gdf\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.328626 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.332966 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data" (OuterVolumeSpecName: "config-data") pod "688094e3-8cd7-49a1-944b-a3ec9f8a22ad" (UID: "688094e3-8cd7-49a1-944b-a3ec9f8a22ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:50 crc kubenswrapper[4889]: I1128 07:08:50.430480 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/688094e3-8cd7-49a1-944b-a3ec9f8a22ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.056419 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"688094e3-8cd7-49a1-944b-a3ec9f8a22ad","Type":"ContainerDied","Data":"2ea1aa911922b75e42b63a1f1481d64f3dd3d88cb0fb149567a81390e2772c0d"} Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.056477 4889 scope.go:117] "RemoveContainer" containerID="d2c6c7cb50eadb7c6c7c6457a701ce31b8358d43a28f6b4fcba8c8b4747384e3" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.056624 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.063729 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.079648 4889 scope.go:117] "RemoveContainer" containerID="e048c0c3be6649ff06912bacb6f6161b8b52e75d88222958570a028361efa949" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.115092 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.126701 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.137230 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:08:51 crc kubenswrapper[4889]: E1128 07:08:51.145978 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerName="probe" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.146019 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerName="probe" Nov 28 07:08:51 crc kubenswrapper[4889]: E1128 07:08:51.146258 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerName="cinder-scheduler" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.146368 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerName="cinder-scheduler" Nov 28 07:08:51 crc kubenswrapper[4889]: E1128 07:08:51.146402 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237dc8d4-ff16-46ab-a728-212825640012" containerName="init" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.146411 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="237dc8d4-ff16-46ab-a728-212825640012" containerName="init" Nov 28 07:08:51 crc kubenswrapper[4889]: E1128 07:08:51.155839 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237dc8d4-ff16-46ab-a728-212825640012" containerName="dnsmasq-dns" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.155882 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="237dc8d4-ff16-46ab-a728-212825640012" containerName="dnsmasq-dns" Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.156687 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" containerName="probe" Nov 28 07:08:51 crc 
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.156770 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="237dc8d4-ff16-46ab-a728-212825640012" containerName="dnsmasq-dns"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.171903 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.172051 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fd84fdbd8-ztpds"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.172277 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.174798 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.248648 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-scripts\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.248806 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.248863 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.248972 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5btw\" (UniqueName: \"kubernetes.io/projected/f91eac1f-c699-4e53-9ff8-e8326bf4e185-kube-api-access-j5btw\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.249045 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91eac1f-c699-4e53-9ff8-e8326bf4e185-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.249114 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.252088 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b7cd58688-jtgqv"]
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.252307 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b7cd58688-jtgqv" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api-log" containerID="cri-o://fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7" gracePeriod=30
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.252452 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b7cd58688-jtgqv" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api" containerID="cri-o://931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4" gracePeriod=30
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.346525 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688094e3-8cd7-49a1-944b-a3ec9f8a22ad" path="/var/lib/kubelet/pods/688094e3-8cd7-49a1-944b-a3ec9f8a22ad/volumes"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.351272 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91eac1f-c699-4e53-9ff8-e8326bf4e185-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.351524 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.351700 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-scripts\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.351896 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.352045 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.352271 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5btw\" (UniqueName: \"kubernetes.io/projected/f91eac1f-c699-4e53-9ff8-e8326bf4e185-kube-api-access-j5btw\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.351358 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91eac1f-c699-4e53-9ff8-e8326bf4e185-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.357658 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.358475 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.359809 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.367332 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-scripts\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.396151 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5btw\" (UniqueName: \"kubernetes.io/projected/f91eac1f-c699-4e53-9ff8-e8326bf4e185-kube-api-access-j5btw\") pod \"cinder-scheduler-0\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.492510 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 07:08:51 crc kubenswrapper[4889]: I1128 07:08:51.544215 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-55c8d644db-cqxsn"
Nov 28 07:08:52 crc kubenswrapper[4889]: I1128 07:08:52.195098 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-85bffcf884-2hbfs"
Nov 28 07:08:52 crc kubenswrapper[4889]: I1128 07:08:52.745504 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 07:08:53 crc kubenswrapper[4889]: I1128 07:08:53.090656 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91eac1f-c699-4e53-9ff8-e8326bf4e185","Type":"ContainerStarted","Data":"a3d1fac957c68dd8ee23ae095f85116ed1ba8797060f0ece64af38a91a7a1cce"}
Nov 28 07:08:53 crc kubenswrapper[4889]: I1128 07:08:53.093371 4889 generic.go:334] "Generic (PLEG): container finished" podID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerID="fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7" exitCode=143
Nov 28 07:08:53 crc kubenswrapper[4889]: I1128 07:08:53.093407 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7cd58688-jtgqv" event={"ID":"0ad4880a-047e-4ea2-8a17-7c5d2e706adb","Type":"ContainerDied","Data":"fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7"}
Nov 28 07:08:53 crc kubenswrapper[4889]: I1128 07:08:53.942171 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-55c8d644db-cqxsn"
Nov 28 07:08:54 crc kubenswrapper[4889]: I1128 07:08:54.110910 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91eac1f-c699-4e53-9ff8-e8326bf4e185","Type":"ContainerStarted","Data":"f75c2d3e942c4126e39bbb3c030aefb4924d2c9473c25ea74d8b3c3218308e58"}
Nov 28 07:08:54 crc kubenswrapper[4889]: I1128 07:08:54.110964 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91eac1f-c699-4e53-9ff8-e8326bf4e185","Type":"ContainerStarted","Data":"49e67bb5951ea35d2e035af45fe412854503885e3be636b75ae99068b967486a"}
Nov 28 07:08:54 crc kubenswrapper[4889]: I1128 07:08:54.134238 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.134219614 podStartE2EDuration="3.134219614s" podCreationTimestamp="2025-11-28 07:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:54.128437505 +0000 UTC m=+1257.098671670" watchObservedRunningTime="2025-11-28 07:08:54.134219614 +0000 UTC m=+1257.104453769"
Nov 28 07:08:54 crc kubenswrapper[4889]: I1128 07:08:54.421679 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b7cd58688-jtgqv" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:57902->10.217.0.155:9311: read: connection reset by peer"
Nov 28 07:08:54 crc kubenswrapper[4889]: I1128 07:08:54.421693 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b7cd58688-jtgqv" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:57908->10.217.0.155:9311: read: connection reset by peer"
Nov 28 07:08:54 crc kubenswrapper[4889]: I1128 07:08:54.635728 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 28 07:08:54 crc kubenswrapper[4889]: I1128 07:08:54.916357 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7cd58688-jtgqv"
Nov 28 07:08:54 crc kubenswrapper[4889]: I1128 07:08:54.944599 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c99d75dcc-cgtnj"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.019757 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85bffcf884-2hbfs"]
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.019972 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85bffcf884-2hbfs" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" containerName="neutron-api" containerID="cri-o://dfa5003630dcf04869c1dc354bcd218515a456f185def3d88418656c8e6216cd" gracePeriod=30
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.020424 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85bffcf884-2hbfs" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" containerName="neutron-httpd" containerID="cri-o://c0d210825e05d6b60ab162be789d8a9af6b29bc796f54dbf9281d496f2b52198" gracePeriod=30
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.048547 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data-custom\") pod \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") "
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.048691 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-combined-ca-bundle\") pod \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") "
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.048777 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-kube-api-access-xf4kh\") pod \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") "
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.048832 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data\") pod \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") "
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.048859 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-logs\") pod \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\" (UID: \"0ad4880a-047e-4ea2-8a17-7c5d2e706adb\") "
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.050947 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-logs" (OuterVolumeSpecName: "logs") pod "0ad4880a-047e-4ea2-8a17-7c5d2e706adb" (UID: "0ad4880a-047e-4ea2-8a17-7c5d2e706adb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.060114 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0ad4880a-047e-4ea2-8a17-7c5d2e706adb" (UID: "0ad4880a-047e-4ea2-8a17-7c5d2e706adb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.090415 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-kube-api-access-xf4kh" (OuterVolumeSpecName: "kube-api-access-xf4kh") pod "0ad4880a-047e-4ea2-8a17-7c5d2e706adb" (UID: "0ad4880a-047e-4ea2-8a17-7c5d2e706adb"). InnerVolumeSpecName "kube-api-access-xf4kh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.101036 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ad4880a-047e-4ea2-8a17-7c5d2e706adb" (UID: "0ad4880a-047e-4ea2-8a17-7c5d2e706adb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.139905 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data" (OuterVolumeSpecName: "config-data") pod "0ad4880a-047e-4ea2-8a17-7c5d2e706adb" (UID: "0ad4880a-047e-4ea2-8a17-7c5d2e706adb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.144161 4889 generic.go:334] "Generic (PLEG): container finished" podID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerID="931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4" exitCode=0
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.144446 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7cd58688-jtgqv"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.144553 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7cd58688-jtgqv" event={"ID":"0ad4880a-047e-4ea2-8a17-7c5d2e706adb","Type":"ContainerDied","Data":"931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4"}
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.144583 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7cd58688-jtgqv" event={"ID":"0ad4880a-047e-4ea2-8a17-7c5d2e706adb","Type":"ContainerDied","Data":"beedab5bc7262e435f5d7c970afbfa24a4d7d117bf473c89b955d2e9703ad218"}
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.144600 4889 scope.go:117] "RemoveContainer" containerID="931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.150569 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.151592 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-logs\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.151663 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.151769 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.151824 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/0ad4880a-047e-4ea2-8a17-7c5d2e706adb-kube-api-access-xf4kh\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.187157 4889 scope.go:117] "RemoveContainer" containerID="fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.218620 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b7cd58688-jtgqv"]
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.234527 4889 scope.go:117] "RemoveContainer" containerID="931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.236834 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b7cd58688-jtgqv"]
Nov 28 07:08:55 crc kubenswrapper[4889]: E1128 07:08:55.237438 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4\": container with ID starting with 931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4 not found: ID does not exist" containerID="931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.237494 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4"} err="failed to get container status \"931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4\": rpc error: code = NotFound desc = could not find container \"931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4\": container with ID starting with 931d857bcb7750bb36079c9fda24516560c224b5040b0130aebc016d2fc511b4 not found: ID does not exist"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.237528 4889 scope.go:117] "RemoveContainer" containerID="fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7"
Nov 28 07:08:55 crc kubenswrapper[4889]: E1128 07:08:55.237978 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7\": container with ID starting with fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7 not found: ID does not exist" containerID="fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.238226 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7"} err="failed to get container status \"fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7\": rpc error: code = NotFound desc = could not find container \"fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7\": container with ID starting with fea348e2ef1af93967dd00d5a691fb111057c73016acacd5d1995babb926f4f7 not found: ID does not exist"
Nov 28 07:08:55 crc kubenswrapper[4889]: I1128 07:08:55.346606 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" path="/var/lib/kubelet/pods/0ad4880a-047e-4ea2-8a17-7c5d2e706adb/volumes"
Nov 28 07:08:56 crc kubenswrapper[4889]: I1128 07:08:56.156125 4889 generic.go:334] "Generic (PLEG): container finished" podID="acdfb982-66e1-4791-b46a-e6c12765560d" containerID="c0d210825e05d6b60ab162be789d8a9af6b29bc796f54dbf9281d496f2b52198" exitCode=0
Nov 28 07:08:56 crc kubenswrapper[4889]: I1128 07:08:56.156165 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bffcf884-2hbfs" event={"ID":"acdfb982-66e1-4791-b46a-e6c12765560d","Type":"ContainerDied","Data":"c0d210825e05d6b60ab162be789d8a9af6b29bc796f54dbf9281d496f2b52198"}
Nov 28 07:08:56 crc kubenswrapper[4889]: I1128 07:08:56.493469 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.569163 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6s8jb"]
Nov 28 07:08:58 crc kubenswrapper[4889]: E1128 07:08:58.569870 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api"
Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.569883 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api"
Nov 28 07:08:58 crc kubenswrapper[4889]: E1128 07:08:58.569922 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api-log"
Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.569928 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api-log"
assignment" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api-log" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.570102 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.570120 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad4880a-047e-4ea2-8a17-7c5d2e706adb" containerName="barbican-api-log" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.570737 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.581751 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6s8jb"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.660168 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kdfkg"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.663331 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.670448 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kdfkg"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.711723 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldxql\" (UniqueName: \"kubernetes.io/projected/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-kube-api-access-ldxql\") pod \"nova-api-db-create-6s8jb\" (UID: \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\") " pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.711768 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-operator-scripts\") pod \"nova-api-db-create-6s8jb\" (UID: \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\") " pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.740989 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-548d6bf557-pbtfl"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.746551 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.755513 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.755859 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.756083 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.773542 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-548d6bf557-pbtfl"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.781974 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.782022 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.782060 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.782741 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b371f61ff4e58e3c8a1cc2889d70d7351a69170427032ddc9f014086d459fb3"} pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.782812 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" containerID="cri-o://5b371f61ff4e58e3c8a1cc2889d70d7351a69170427032ddc9f014086d459fb3" gracePeriod=600 Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.795161 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-670d-account-create-update-cxn84"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.796350 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.799680 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813489 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-combined-ca-bundle\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813551 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv9jj\" (UniqueName: \"kubernetes.io/projected/e7e88a1b-3b19-46b8-b880-7b342640a8f2-kube-api-access-gv9jj\") pod \"nova-cell0-db-create-kdfkg\" (UID: \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\") " pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813586 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-log-httpd\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813636 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldxql\" (UniqueName: \"kubernetes.io/projected/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-kube-api-access-ldxql\") pod \"nova-api-db-create-6s8jb\" (UID: \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\") " pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813653 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-operator-scripts\") pod \"nova-api-db-create-6s8jb\" (UID: \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\") " pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813724 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-public-tls-certs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813747 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-config-data\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813761 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-run-httpd\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813794 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e88a1b-3b19-46b8-b880-7b342640a8f2-operator-scripts\") pod \"nova-cell0-db-create-kdfkg\" (UID: \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\") " pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813814 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7gs\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-kube-api-access-bk7gs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813843 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-etc-swift\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.813886 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-internal-tls-certs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.814557 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-operator-scripts\") pod \"nova-api-db-create-6s8jb\" (UID: \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\") " pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.818152 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-670d-account-create-update-cxn84"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.850327 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldxql\" (UniqueName: \"kubernetes.io/projected/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-kube-api-access-ldxql\") pod \"nova-api-db-create-6s8jb\" (UID: \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\") " pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.863541 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-tzkfc"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.865750 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.884884 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tzkfc"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.886082 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915225 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-internal-tls-certs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915261 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-combined-ca-bundle\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915287 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv9jj\" (UniqueName: \"kubernetes.io/projected/e7e88a1b-3b19-46b8-b880-7b342640a8f2-kube-api-access-gv9jj\") pod \"nova-cell0-db-create-kdfkg\" (UID: \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\") " pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915523 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-log-httpd\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915599 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f66c57-362a-437d-a4b4-e2c3bf045890-operator-scripts\") pod \"nova-api-670d-account-create-update-cxn84\" (UID: \"c5f66c57-362a-437d-a4b4-e2c3bf045890\") " pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915684 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-public-tls-certs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915736 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5q76\" (UniqueName: \"kubernetes.io/projected/c5f66c57-362a-437d-a4b4-e2c3bf045890-kube-api-access-l5q76\") pod \"nova-api-670d-account-create-update-cxn84\" (UID: \"c5f66c57-362a-437d-a4b4-e2c3bf045890\") " pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915757 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-config-data\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915773 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-run-httpd\") pod 
\"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915834 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e88a1b-3b19-46b8-b880-7b342640a8f2-operator-scripts\") pod \"nova-cell0-db-create-kdfkg\" (UID: \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\") " pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915851 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk7gs\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-kube-api-access-bk7gs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.915896 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-etc-swift\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.917174 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-run-httpd\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.918303 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e88a1b-3b19-46b8-b880-7b342640a8f2-operator-scripts\") pod \"nova-cell0-db-create-kdfkg\" (UID: \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\") " pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.919189 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-log-httpd\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.919698 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-public-tls-certs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.921919 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-etc-swift\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.923421 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-combined-ca-bundle\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " 
pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.927772 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-internal-tls-certs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.953510 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk7gs\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-kube-api-access-bk7gs\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.954485 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-config-data\") pod \"swift-proxy-548d6bf557-pbtfl\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.955475 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv9jj\" (UniqueName: \"kubernetes.io/projected/e7e88a1b-3b19-46b8-b880-7b342640a8f2-kube-api-access-gv9jj\") pod \"nova-cell0-db-create-kdfkg\" (UID: \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\") " pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.961877 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.963227 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.969937 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.970135 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.970434 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wlfnj" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.977070 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:08:58 crc kubenswrapper[4889]: I1128 07:08:58.997867 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.018132 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f66c57-362a-437d-a4b4-e2c3bf045890-operator-scripts\") pod \"nova-api-670d-account-create-update-cxn84\" (UID: \"c5f66c57-362a-437d-a4b4-e2c3bf045890\") " pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.018185 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.018216 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.018244 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gw62\" (UniqueName: \"kubernetes.io/projected/785a729e-1203-4d90-9bc1-447968cd6ffa-kube-api-access-2gw62\") pod \"nova-cell1-db-create-tzkfc\" (UID: \"785a729e-1203-4d90-9bc1-447968cd6ffa\") " pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.018266 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ct5m\" (UniqueName: \"kubernetes.io/projected/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-kube-api-access-5ct5m\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.018299 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5q76\" (UniqueName: \"kubernetes.io/projected/c5f66c57-362a-437d-a4b4-e2c3bf045890-kube-api-access-l5q76\") pod \"nova-api-670d-account-create-update-cxn84\" (UID: \"c5f66c57-362a-437d-a4b4-e2c3bf045890\") " pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.018318 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.018379 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785a729e-1203-4d90-9bc1-447968cd6ffa-operator-scripts\") pod \"nova-cell1-db-create-tzkfc\" (UID: \"785a729e-1203-4d90-9bc1-447968cd6ffa\") " pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.019152 4889 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f66c57-362a-437d-a4b4-e2c3bf045890-operator-scripts\") pod \"nova-api-670d-account-create-update-cxn84\" (UID: \"c5f66c57-362a-437d-a4b4-e2c3bf045890\") " pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.026261 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3d10-account-create-update-r6pwt"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.027878 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.050296 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3d10-account-create-update-r6pwt"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.056022 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.073263 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.099039 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5q76\" (UniqueName: \"kubernetes.io/projected/c5f66c57-362a-437d-a4b4-e2c3bf045890-kube-api-access-l5q76\") pod \"nova-api-670d-account-create-update-cxn84\" (UID: \"c5f66c57-362a-437d-a4b4-e2c3bf045890\") " pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.117276 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.119570 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.119603 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.119632 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8952181-2de9-4a32-8c71-49e044d03333-operator-scripts\") pod \"nova-cell0-3d10-account-create-update-r6pwt\" (UID: \"c8952181-2de9-4a32-8c71-49e044d03333\") " pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.119655 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gw62\" (UniqueName: \"kubernetes.io/projected/785a729e-1203-4d90-9bc1-447968cd6ffa-kube-api-access-2gw62\") pod \"nova-cell1-db-create-tzkfc\" (UID: \"785a729e-1203-4d90-9bc1-447968cd6ffa\") " pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.119672 4889 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5ct5m\" (UniqueName: \"kubernetes.io/projected/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-kube-api-access-5ct5m\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.119722 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.120029 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvbn\" (UniqueName: \"kubernetes.io/projected/c8952181-2de9-4a32-8c71-49e044d03333-kube-api-access-xvvbn\") pod \"nova-cell0-3d10-account-create-update-r6pwt\" (UID: \"c8952181-2de9-4a32-8c71-49e044d03333\") " pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.120284 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785a729e-1203-4d90-9bc1-447968cd6ffa-operator-scripts\") pod \"nova-cell1-db-create-tzkfc\" (UID: \"785a729e-1203-4d90-9bc1-447968cd6ffa\") " pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.121128 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785a729e-1203-4d90-9bc1-447968cd6ffa-operator-scripts\") pod \"nova-cell1-db-create-tzkfc\" (UID: \"785a729e-1203-4d90-9bc1-447968cd6ffa\") " pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.122606 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.127613 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.132299 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.143327 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gw62\" (UniqueName: \"kubernetes.io/projected/785a729e-1203-4d90-9bc1-447968cd6ffa-kube-api-access-2gw62\") pod \"nova-cell1-db-create-tzkfc\" (UID: \"785a729e-1203-4d90-9bc1-447968cd6ffa\") " pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.146795 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ct5m\" (UniqueName: 
\"kubernetes.io/projected/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-kube-api-access-5ct5m\") pod \"openstackclient\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.222186 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvbn\" (UniqueName: \"kubernetes.io/projected/c8952181-2de9-4a32-8c71-49e044d03333-kube-api-access-xvvbn\") pod \"nova-cell0-3d10-account-create-update-r6pwt\" (UID: \"c8952181-2de9-4a32-8c71-49e044d03333\") " pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.222339 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8952181-2de9-4a32-8c71-49e044d03333-operator-scripts\") pod \"nova-cell0-3d10-account-create-update-r6pwt\" (UID: \"c8952181-2de9-4a32-8c71-49e044d03333\") " pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.224472 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8952181-2de9-4a32-8c71-49e044d03333-operator-scripts\") pod \"nova-cell0-3d10-account-create-update-r6pwt\" (UID: \"c8952181-2de9-4a32-8c71-49e044d03333\") " pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.233793 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6403-account-create-update-lm2b6"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.235003 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.244372 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.249662 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvbn\" (UniqueName: \"kubernetes.io/projected/c8952181-2de9-4a32-8c71-49e044d03333-kube-api-access-xvvbn\") pod \"nova-cell0-3d10-account-create-update-r6pwt\" (UID: \"c8952181-2de9-4a32-8c71-49e044d03333\") " pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.272508 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerDied","Data":"5b371f61ff4e58e3c8a1cc2889d70d7351a69170427032ddc9f014086d459fb3"} Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.272741 4889 scope.go:117] "RemoveContainer" containerID="8bcf61faea8df3b4bedcdbe66375ffe429928fd4ff7747468313822736645149" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.269761 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerID="5b371f61ff4e58e3c8a1cc2889d70d7351a69170427032ddc9f014086d459fb3" exitCode=0 Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.279840 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6403-account-create-update-lm2b6"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.292421 4889 generic.go:334] "Generic (PLEG): container finished" podID="acdfb982-66e1-4791-b46a-e6c12765560d" 
containerID="dfa5003630dcf04869c1dc354bcd218515a456f185def3d88418656c8e6216cd" exitCode=0 Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.292471 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bffcf884-2hbfs" event={"ID":"acdfb982-66e1-4791-b46a-e6c12765560d","Type":"ContainerDied","Data":"dfa5003630dcf04869c1dc354bcd218515a456f185def3d88418656c8e6216cd"} Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.323431 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57fa70ad-b991-42d2-8d9c-53b9d19e6045-operator-scripts\") pod \"nova-cell1-6403-account-create-update-lm2b6\" (UID: \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\") " pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.323723 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46hh\" (UniqueName: \"kubernetes.io/projected/57fa70ad-b991-42d2-8d9c-53b9d19e6045-kube-api-access-j46hh\") pod \"nova-cell1-6403-account-create-update-lm2b6\" (UID: \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\") " pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.429995 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57fa70ad-b991-42d2-8d9c-53b9d19e6045-operator-scripts\") pod \"nova-cell1-6403-account-create-update-lm2b6\" (UID: \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\") " pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.430057 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j46hh\" (UniqueName: \"kubernetes.io/projected/57fa70ad-b991-42d2-8d9c-53b9d19e6045-kube-api-access-j46hh\") pod \"nova-cell1-6403-account-create-update-lm2b6\" (UID: \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\") " pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.431855 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57fa70ad-b991-42d2-8d9c-53b9d19e6045-operator-scripts\") pod \"nova-cell1-6403-account-create-update-lm2b6\" (UID: \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\") " pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.432815 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.445205 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.450317 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46hh\" (UniqueName: \"kubernetes.io/projected/57fa70ad-b991-42d2-8d9c-53b9d19e6045-kube-api-access-j46hh\") pod \"nova-cell1-6403-account-create-update-lm2b6\" (UID: \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\") " pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.474490 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.565349 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.721975 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6s8jb"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.804297 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kdfkg"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.813350 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.906007 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-670d-account-create-update-cxn84"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.969535 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-combined-ca-bundle\") pod \"acdfb982-66e1-4791-b46a-e6c12765560d\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.969734 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-httpd-config\") pod \"acdfb982-66e1-4791-b46a-e6c12765560d\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.969813 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ddlz\" (UniqueName: \"kubernetes.io/projected/acdfb982-66e1-4791-b46a-e6c12765560d-kube-api-access-4ddlz\") pod \"acdfb982-66e1-4791-b46a-e6c12765560d\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.969888 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-ovndb-tls-certs\") pod \"acdfb982-66e1-4791-b46a-e6c12765560d\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.969955 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-config\") pod \"acdfb982-66e1-4791-b46a-e6c12765560d\" (UID: \"acdfb982-66e1-4791-b46a-e6c12765560d\") " Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.972865 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-548d6bf557-pbtfl"] Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.976435 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "acdfb982-66e1-4791-b46a-e6c12765560d" (UID: "acdfb982-66e1-4791-b46a-e6c12765560d"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:08:59 crc kubenswrapper[4889]: I1128 07:08:59.993952 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acdfb982-66e1-4791-b46a-e6c12765560d-kube-api-access-4ddlz" (OuterVolumeSpecName: "kube-api-access-4ddlz") pod "acdfb982-66e1-4791-b46a-e6c12765560d" (UID: "acdfb982-66e1-4791-b46a-e6c12765560d"). InnerVolumeSpecName "kube-api-access-4ddlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.053785 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-config" (OuterVolumeSpecName: "config") pod "acdfb982-66e1-4791-b46a-e6c12765560d" (UID: "acdfb982-66e1-4791-b46a-e6c12765560d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.056310 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acdfb982-66e1-4791-b46a-e6c12765560d" (UID: "acdfb982-66e1-4791-b46a-e6c12765560d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.071676 4889 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.071715 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ddlz\" (UniqueName: \"kubernetes.io/projected/acdfb982-66e1-4791-b46a-e6c12765560d-kube-api-access-4ddlz\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.071737 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.071746 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.139006 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "acdfb982-66e1-4791-b46a-e6c12765560d" (UID: "acdfb982-66e1-4791-b46a-e6c12765560d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.153809 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tzkfc"] Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.173199 4889 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdfb982-66e1-4791-b46a-e6c12765560d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.199552 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.275560 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3d10-account-create-update-r6pwt"] Nov 28 07:09:00 crc kubenswrapper[4889]: W1128 07:09:00.278110 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8952181_2de9_4a32_8c71_49e044d03333.slice/crio-e8f34ff13d5c11f5bc22734c42c4862572e4e5770f8ced9e5dc91a3b75f47586 WatchSource:0}: Error finding container e8f34ff13d5c11f5bc22734c42c4862572e4e5770f8ced9e5dc91a3b75f47586: Status 404 returned error can't find the container with id e8f34ff13d5c11f5bc22734c42c4862572e4e5770f8ced9e5dc91a3b75f47586 Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.331560 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-670d-account-create-update-cxn84" event={"ID":"c5f66c57-362a-437d-a4b4-e2c3bf045890","Type":"ContainerStarted","Data":"cc02df9dd41cbbd4e51054169827a03deb88d838c9bdef4962c059f73afe61b1"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.331610 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-670d-account-create-update-cxn84" event={"ID":"c5f66c57-362a-437d-a4b4-e2c3bf045890","Type":"ContainerStarted","Data":"0df631d42eda24204ea1f47eb9257329915ba38eac62aef2272b5b7c56a656a4"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.347958 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-670d-account-create-update-cxn84" podStartSLOduration=2.347941695 podStartE2EDuration="2.347941695s" podCreationTimestamp="2025-11-28 07:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:00.346249444 +0000 UTC m=+1263.316483599" watchObservedRunningTime="2025-11-28 07:09:00.347941695 +0000 UTC m=+1263.318175850" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.357187 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kdfkg" event={"ID":"e7e88a1b-3b19-46b8-b880-7b342640a8f2","Type":"ContainerStarted","Data":"75ed159e60c5103572c8fb3ecbae7b93d6b368514cd6572fd9c95be2410e5190"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.357244 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kdfkg" event={"ID":"e7e88a1b-3b19-46b8-b880-7b342640a8f2","Type":"ContainerStarted","Data":"2912f03d0ce1d80b6f90dbacc6bafdb8076c17cc6d73968e930f5032ec4ab6bb"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.364352 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tzkfc" event={"ID":"785a729e-1203-4d90-9bc1-447968cd6ffa","Type":"ContainerStarted","Data":"7f3694eb319752be38c240ba4a72b41640519eae7694b0571f0ee78694b2bc65"} Nov 28 07:09:00 crc 
kubenswrapper[4889]: I1128 07:09:00.373108 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9a763079-28f4-4dd4-8ad8-96bc23a29fb8","Type":"ContainerStarted","Data":"d335e8cb8b8fec9361e221b93d7a075f03a9649508dfb7d06173478e1e88ae32"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.378211 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-kdfkg" podStartSLOduration=2.378187393 podStartE2EDuration="2.378187393s" podCreationTimestamp="2025-11-28 07:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:00.375197601 +0000 UTC m=+1263.345431756" watchObservedRunningTime="2025-11-28 07:09:00.378187393 +0000 UTC m=+1263.348421548" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.378457 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-548d6bf557-pbtfl" event={"ID":"8cff4827-368d-4e19-beb0-b22b71032f26","Type":"ContainerStarted","Data":"9d34ec502260920510047850ef0fe1d6c9af40106c243576c843ed178ee5cbea"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.379778 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" event={"ID":"c8952181-2de9-4a32-8c71-49e044d03333","Type":"ContainerStarted","Data":"e8f34ff13d5c11f5bc22734c42c4862572e4e5770f8ced9e5dc91a3b75f47586"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.382259 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bffcf884-2hbfs" event={"ID":"acdfb982-66e1-4791-b46a-e6c12765560d","Type":"ContainerDied","Data":"42b61ff5a5bfa71bb14c55d9a64b22e1f302454e6398a603c78647177cb97264"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.382578 4889 scope.go:117] "RemoveContainer" containerID="c0d210825e05d6b60ab162be789d8a9af6b29bc796f54dbf9281d496f2b52198" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.382886 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85bffcf884-2hbfs" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.403064 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"59b1be213e0c3af7ecbb85479735c5e364bee7085ba772a3db6c7ee269ef019c"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.403317 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6403-account-create-update-lm2b6"] Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.412016 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6s8jb" event={"ID":"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e","Type":"ContainerStarted","Data":"3681738ee6ffdc99631fc7592669b27cf1b5051d8fb5d42ca06423a423f3298e"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.412062 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6s8jb" event={"ID":"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e","Type":"ContainerStarted","Data":"540dd27266c3d525d020bedbe575914ea5a71aa6215b1c3f613bd86d67a5cfdc"} Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.436218 4889 scope.go:117] "RemoveContainer" containerID="dfa5003630dcf04869c1dc354bcd218515a456f185def3d88418656c8e6216cd" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.467845 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85bffcf884-2hbfs"] Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.476065 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85bffcf884-2hbfs"] Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.486661 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-6s8jb" podStartSLOduration=2.486639172 podStartE2EDuration="2.486639172s" podCreationTimestamp="2025-11-28 07:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:00.445553404 +0000 UTC m=+1263.415787559" watchObservedRunningTime="2025-11-28 07:09:00.486639172 +0000 UTC m=+1263.456873327" Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.515877 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.516128 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="ceilometer-central-agent" containerID="cri-o://ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc" gracePeriod=30 Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.516833 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="proxy-httpd" containerID="cri-o://983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e" gracePeriod=30 Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.516890 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="sg-core" containerID="cri-o://b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c" gracePeriod=30 Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.516921 4889 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="ceilometer-notification-agent" containerID="cri-o://85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51" gracePeriod=30 Nov 28 07:09:00 crc kubenswrapper[4889]: I1128 07:09:00.524204 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.346147 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" path="/var/lib/kubelet/pods/acdfb982-66e1-4791-b46a-e6c12765560d/volumes" Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.421959 4889 generic.go:334] "Generic (PLEG): container finished" podID="c5f66c57-362a-437d-a4b4-e2c3bf045890" containerID="cc02df9dd41cbbd4e51054169827a03deb88d838c9bdef4962c059f73afe61b1" exitCode=0 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.422041 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-670d-account-create-update-cxn84" event={"ID":"c5f66c57-362a-437d-a4b4-e2c3bf045890","Type":"ContainerDied","Data":"cc02df9dd41cbbd4e51054169827a03deb88d838c9bdef4962c059f73afe61b1"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.424360 4889 generic.go:334] "Generic (PLEG): container finished" podID="c8952181-2de9-4a32-8c71-49e044d03333" containerID="4357abd1870bf9b98ddbe5b9e7cf569546cc20dde733492fd058d85e4252fbcd" exitCode=0 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.424415 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" event={"ID":"c8952181-2de9-4a32-8c71-49e044d03333","Type":"ContainerDied","Data":"4357abd1870bf9b98ddbe5b9e7cf569546cc20dde733492fd058d85e4252fbcd"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.425777 4889 generic.go:334] "Generic (PLEG): container finished" podID="e7e88a1b-3b19-46b8-b880-7b342640a8f2" containerID="75ed159e60c5103572c8fb3ecbae7b93d6b368514cd6572fd9c95be2410e5190" exitCode=0 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.425831 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kdfkg" event={"ID":"e7e88a1b-3b19-46b8-b880-7b342640a8f2","Type":"ContainerDied","Data":"75ed159e60c5103572c8fb3ecbae7b93d6b368514cd6572fd9c95be2410e5190"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.429426 4889 generic.go:334] "Generic (PLEG): container finished" podID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerID="983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e" exitCode=0 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.429462 4889 generic.go:334] "Generic (PLEG): container finished" podID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerID="b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c" exitCode=2 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.429473 4889 generic.go:334] "Generic (PLEG): container finished" podID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerID="ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc" exitCode=0 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.429512 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerDied","Data":"983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.429530 4889 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerDied","Data":"b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.429542 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerDied","Data":"ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.431002 4889 generic.go:334] "Generic (PLEG): container finished" podID="57fa70ad-b991-42d2-8d9c-53b9d19e6045" containerID="51667981de37a0e16560a734e0071aa71940bbf1daef40d427e6300ca8b6578b" exitCode=0 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.431054 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6403-account-create-update-lm2b6" event={"ID":"57fa70ad-b991-42d2-8d9c-53b9d19e6045","Type":"ContainerDied","Data":"51667981de37a0e16560a734e0071aa71940bbf1daef40d427e6300ca8b6578b"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.431072 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6403-account-create-update-lm2b6" event={"ID":"57fa70ad-b991-42d2-8d9c-53b9d19e6045","Type":"ContainerStarted","Data":"0094e4acbe91de04000be2e26c9c69fa9fb27eb3cd9234c362140ac6132b6a42"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.466740 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6s8jb" event={"ID":"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e","Type":"ContainerDied","Data":"3681738ee6ffdc99631fc7592669b27cf1b5051d8fb5d42ca06423a423f3298e"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.466892 4889 generic.go:334] "Generic (PLEG): container finished" podID="2cf9f8ba-742e-483a-bbd9-9474dc0bb17e" containerID="3681738ee6ffdc99631fc7592669b27cf1b5051d8fb5d42ca06423a423f3298e" exitCode=0 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.484577 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-548d6bf557-pbtfl" event={"ID":"8cff4827-368d-4e19-beb0-b22b71032f26","Type":"ContainerStarted","Data":"6dc7556254073930e346ad003426e246a1fe721ea68cbc74809582204ec3e3ad"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.484624 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-548d6bf557-pbtfl" event={"ID":"8cff4827-368d-4e19-beb0-b22b71032f26","Type":"ContainerStarted","Data":"def89232890ff2ea1170bf03b014fd49855e7baececf04474b47909d8032e453"} Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.484964 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.485147 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.493788 4889 generic.go:334] "Generic (PLEG): container finished" podID="785a729e-1203-4d90-9bc1-447968cd6ffa" containerID="d18408781f5d06767c9d4579d6c659e932641952a74fa7191a3d835fe6de724a" exitCode=0 Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.494048 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tzkfc" event={"ID":"785a729e-1203-4d90-9bc1-447968cd6ffa","Type":"ContainerDied","Data":"d18408781f5d06767c9d4579d6c659e932641952a74fa7191a3d835fe6de724a"} Nov 
28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.553507 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-548d6bf557-pbtfl" podStartSLOduration=3.5534859450000003 podStartE2EDuration="3.553485945s" podCreationTimestamp="2025-11-28 07:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:01.546072917 +0000 UTC m=+1264.516307072" watchObservedRunningTime="2025-11-28 07:09:01.553485945 +0000 UTC m=+1264.523720100" Nov 28 07:09:01 crc kubenswrapper[4889]: I1128 07:09:01.739008 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 07:09:02 crc kubenswrapper[4889]: I1128 07:09:02.965053 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tzkfc" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.046654 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gw62\" (UniqueName: \"kubernetes.io/projected/785a729e-1203-4d90-9bc1-447968cd6ffa-kube-api-access-2gw62\") pod \"785a729e-1203-4d90-9bc1-447968cd6ffa\" (UID: \"785a729e-1203-4d90-9bc1-447968cd6ffa\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.047094 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785a729e-1203-4d90-9bc1-447968cd6ffa-operator-scripts\") pod \"785a729e-1203-4d90-9bc1-447968cd6ffa\" (UID: \"785a729e-1203-4d90-9bc1-447968cd6ffa\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.048623 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/785a729e-1203-4d90-9bc1-447968cd6ffa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "785a729e-1203-4d90-9bc1-447968cd6ffa" (UID: "785a729e-1203-4d90-9bc1-447968cd6ffa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.057755 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785a729e-1203-4d90-9bc1-447968cd6ffa-kube-api-access-2gw62" (OuterVolumeSpecName: "kube-api-access-2gw62") pod "785a729e-1203-4d90-9bc1-447968cd6ffa" (UID: "785a729e-1203-4d90-9bc1-447968cd6ffa"). InnerVolumeSpecName "kube-api-access-2gw62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.149264 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gw62\" (UniqueName: \"kubernetes.io/projected/785a729e-1203-4d90-9bc1-447968cd6ffa-kube-api-access-2gw62\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.149306 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785a729e-1203-4d90-9bc1-447968cd6ffa-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.215808 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.223623 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.236182 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.239369 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.250414 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351162 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e88a1b-3b19-46b8-b880-7b342640a8f2-operator-scripts\") pod \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\" (UID: \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351362 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvvbn\" (UniqueName: \"kubernetes.io/projected/c8952181-2de9-4a32-8c71-49e044d03333-kube-api-access-xvvbn\") pod \"c8952181-2de9-4a32-8c71-49e044d03333\" (UID: \"c8952181-2de9-4a32-8c71-49e044d03333\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351449 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57fa70ad-b991-42d2-8d9c-53b9d19e6045-operator-scripts\") pod \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\" (UID: \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351588 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv9jj\" (UniqueName: \"kubernetes.io/projected/e7e88a1b-3b19-46b8-b880-7b342640a8f2-kube-api-access-gv9jj\") pod \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\" (UID: \"e7e88a1b-3b19-46b8-b880-7b342640a8f2\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351664 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f66c57-362a-437d-a4b4-e2c3bf045890-operator-scripts\") pod \"c5f66c57-362a-437d-a4b4-e2c3bf045890\" (UID: \"c5f66c57-362a-437d-a4b4-e2c3bf045890\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351787 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46hh\" (UniqueName: \"kubernetes.io/projected/57fa70ad-b991-42d2-8d9c-53b9d19e6045-kube-api-access-j46hh\") pod \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\" (UID: \"57fa70ad-b991-42d2-8d9c-53b9d19e6045\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351914 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-operator-scripts\") pod \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\" (UID: \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351992 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5q76\" (UniqueName: \"kubernetes.io/projected/c5f66c57-362a-437d-a4b4-e2c3bf045890-kube-api-access-l5q76\") pod \"c5f66c57-362a-437d-a4b4-e2c3bf045890\" (UID: 
\"c5f66c57-362a-437d-a4b4-e2c3bf045890\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.352068 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldxql\" (UniqueName: \"kubernetes.io/projected/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-kube-api-access-ldxql\") pod \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\" (UID: \"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.352156 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8952181-2de9-4a32-8c71-49e044d03333-operator-scripts\") pod \"c8952181-2de9-4a32-8c71-49e044d03333\" (UID: \"c8952181-2de9-4a32-8c71-49e044d03333\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.351780 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e88a1b-3b19-46b8-b880-7b342640a8f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7e88a1b-3b19-46b8-b880-7b342640a8f2" (UID: "e7e88a1b-3b19-46b8-b880-7b342640a8f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.352582 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57fa70ad-b991-42d2-8d9c-53b9d19e6045-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57fa70ad-b991-42d2-8d9c-53b9d19e6045" (UID: "57fa70ad-b991-42d2-8d9c-53b9d19e6045"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.352592 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e88a1b-3b19-46b8-b880-7b342640a8f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.352673 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cf9f8ba-742e-483a-bbd9-9474dc0bb17e" (UID: "2cf9f8ba-742e-483a-bbd9-9474dc0bb17e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.353298 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f66c57-362a-437d-a4b4-e2c3bf045890-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5f66c57-362a-437d-a4b4-e2c3bf045890" (UID: "c5f66c57-362a-437d-a4b4-e2c3bf045890"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.353503 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8952181-2de9-4a32-8c71-49e044d03333-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8952181-2de9-4a32-8c71-49e044d03333" (UID: "c8952181-2de9-4a32-8c71-49e044d03333"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.356460 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fa70ad-b991-42d2-8d9c-53b9d19e6045-kube-api-access-j46hh" (OuterVolumeSpecName: "kube-api-access-j46hh") pod "57fa70ad-b991-42d2-8d9c-53b9d19e6045" (UID: "57fa70ad-b991-42d2-8d9c-53b9d19e6045"). InnerVolumeSpecName "kube-api-access-j46hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.356943 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8952181-2de9-4a32-8c71-49e044d03333-kube-api-access-xvvbn" (OuterVolumeSpecName: "kube-api-access-xvvbn") pod "c8952181-2de9-4a32-8c71-49e044d03333" (UID: "c8952181-2de9-4a32-8c71-49e044d03333"). InnerVolumeSpecName "kube-api-access-xvvbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.361947 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-kube-api-access-ldxql" (OuterVolumeSpecName: "kube-api-access-ldxql") pod "2cf9f8ba-742e-483a-bbd9-9474dc0bb17e" (UID: "2cf9f8ba-742e-483a-bbd9-9474dc0bb17e"). InnerVolumeSpecName "kube-api-access-ldxql". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.362932 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f66c57-362a-437d-a4b4-e2c3bf045890-kube-api-access-l5q76" (OuterVolumeSpecName: "kube-api-access-l5q76") pod "c5f66c57-362a-437d-a4b4-e2c3bf045890" (UID: "c5f66c57-362a-437d-a4b4-e2c3bf045890"). InnerVolumeSpecName "kube-api-access-l5q76". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.372250 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e88a1b-3b19-46b8-b880-7b342640a8f2-kube-api-access-gv9jj" (OuterVolumeSpecName: "kube-api-access-gv9jj") pod "e7e88a1b-3b19-46b8-b880-7b342640a8f2" (UID: "e7e88a1b-3b19-46b8-b880-7b342640a8f2"). InnerVolumeSpecName "kube-api-access-gv9jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.403634 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.455882 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvvbn\" (UniqueName: \"kubernetes.io/projected/c8952181-2de9-4a32-8c71-49e044d03333-kube-api-access-xvvbn\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.456999 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57fa70ad-b991-42d2-8d9c-53b9d19e6045-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.457042 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv9jj\" (UniqueName: \"kubernetes.io/projected/e7e88a1b-3b19-46b8-b880-7b342640a8f2-kube-api-access-gv9jj\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.457055 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f66c57-362a-437d-a4b4-e2c3bf045890-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.457068 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j46hh\" (UniqueName: \"kubernetes.io/projected/57fa70ad-b991-42d2-8d9c-53b9d19e6045-kube-api-access-j46hh\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.457083 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.457093 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5q76\" (UniqueName: \"kubernetes.io/projected/c5f66c57-362a-437d-a4b4-e2c3bf045890-kube-api-access-l5q76\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.457101 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldxql\" (UniqueName: \"kubernetes.io/projected/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e-kube-api-access-ldxql\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.457111 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8952181-2de9-4a32-8c71-49e044d03333-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.515672 4889 generic.go:334] "Generic (PLEG): container finished" podID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerID="85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51" exitCode=0 Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.515978 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.517125 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerDied","Data":"85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51"} Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.517308 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b0ced3-9aef-4c0e-bae5-44573e094a49","Type":"ContainerDied","Data":"9b2c5d2e815f2647d5e3dc5e79773d3fa23f2bd6792835ab17283585c2d8209d"} Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.517404 4889 scope.go:117] "RemoveContainer" containerID="983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.520727 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6403-account-create-update-lm2b6" event={"ID":"57fa70ad-b991-42d2-8d9c-53b9d19e6045","Type":"ContainerDied","Data":"0094e4acbe91de04000be2e26c9c69fa9fb27eb3cd9234c362140ac6132b6a42"} Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.520893 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0094e4acbe91de04000be2e26c9c69fa9fb27eb3cd9234c362140ac6132b6a42" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.521041 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6403-account-create-update-lm2b6" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.523623 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6s8jb" event={"ID":"2cf9f8ba-742e-483a-bbd9-9474dc0bb17e","Type":"ContainerDied","Data":"540dd27266c3d525d020bedbe575914ea5a71aa6215b1c3f613bd86d67a5cfdc"} Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.523649 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6s8jb" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.523664 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540dd27266c3d525d020bedbe575914ea5a71aa6215b1c3f613bd86d67a5cfdc" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.526447 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-670d-account-create-update-cxn84" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.526447 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-670d-account-create-update-cxn84" event={"ID":"c5f66c57-362a-437d-a4b4-e2c3bf045890","Type":"ContainerDied","Data":"0df631d42eda24204ea1f47eb9257329915ba38eac62aef2272b5b7c56a656a4"} Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.526783 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df631d42eda24204ea1f47eb9257329915ba38eac62aef2272b5b7c56a656a4" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.528873 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.528980 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3d10-account-create-update-r6pwt" event={"ID":"c8952181-2de9-4a32-8c71-49e044d03333","Type":"ContainerDied","Data":"e8f34ff13d5c11f5bc22734c42c4862572e4e5770f8ced9e5dc91a3b75f47586"} Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.529066 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f34ff13d5c11f5bc22734c42c4862572e4e5770f8ced9e5dc91a3b75f47586" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.531793 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kdfkg" event={"ID":"e7e88a1b-3b19-46b8-b880-7b342640a8f2","Type":"ContainerDied","Data":"2912f03d0ce1d80b6f90dbacc6bafdb8076c17cc6d73968e930f5032ec4ab6bb"} Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.531862 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2912f03d0ce1d80b6f90dbacc6bafdb8076c17cc6d73968e930f5032ec4ab6bb" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.532012 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kdfkg" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.533899 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tzkfc" event={"ID":"785a729e-1203-4d90-9bc1-447968cd6ffa","Type":"ContainerDied","Data":"7f3694eb319752be38c240ba4a72b41640519eae7694b0571f0ee78694b2bc65"} Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.533940 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f3694eb319752be38c240ba4a72b41640519eae7694b0571f0ee78694b2bc65" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.533979 4889 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.556417 4889 scope.go:117] "RemoveContainer" containerID="b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.558398 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-combined-ca-bundle\") pod \"05b0ced3-9aef-4c0e-bae5-44573e094a49\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.558437 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-run-httpd\") pod \"05b0ced3-9aef-4c0e-bae5-44573e094a49\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.558456 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-sg-core-conf-yaml\") pod \"05b0ced3-9aef-4c0e-bae5-44573e094a49\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.558501 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-scripts\") pod \"05b0ced3-9aef-4c0e-bae5-44573e094a49\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.558526 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-log-httpd\") pod \"05b0ced3-9aef-4c0e-bae5-44573e094a49\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.558571 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-config-data\") pod \"05b0ced3-9aef-4c0e-bae5-44573e094a49\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.558679 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnfbf\" (UniqueName: \"kubernetes.io/projected/05b0ced3-9aef-4c0e-bae5-44573e094a49-kube-api-access-fnfbf\") pod \"05b0ced3-9aef-4c0e-bae5-44573e094a49\" (UID: \"05b0ced3-9aef-4c0e-bae5-44573e094a49\") " Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.559238 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "05b0ced3-9aef-4c0e-bae5-44573e094a49" (UID: "05b0ced3-9aef-4c0e-bae5-44573e094a49"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.559949 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "05b0ced3-9aef-4c0e-bae5-44573e094a49" (UID: "05b0ced3-9aef-4c0e-bae5-44573e094a49"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.562792 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b0ced3-9aef-4c0e-bae5-44573e094a49-kube-api-access-fnfbf" (OuterVolumeSpecName: "kube-api-access-fnfbf") pod "05b0ced3-9aef-4c0e-bae5-44573e094a49" (UID: "05b0ced3-9aef-4c0e-bae5-44573e094a49"). InnerVolumeSpecName "kube-api-access-fnfbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.564987 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-scripts" (OuterVolumeSpecName: "scripts") pod "05b0ced3-9aef-4c0e-bae5-44573e094a49" (UID: "05b0ced3-9aef-4c0e-bae5-44573e094a49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.586945 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "05b0ced3-9aef-4c0e-bae5-44573e094a49" (UID: "05b0ced3-9aef-4c0e-bae5-44573e094a49"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.596347 4889 scope.go:117] "RemoveContainer" containerID="85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.633965 4889 scope.go:117] "RemoveContainer" containerID="ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.658041 4889 scope.go:117] "RemoveContainer" containerID="983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.659028 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e\": container with ID starting with 983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e not found: ID does not exist" containerID="983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.659063 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e"} err="failed to get container status \"983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e\": rpc error: code = NotFound desc = could not find container \"983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e\": container with ID starting with 983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e not found: ID does not exist" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.659089 4889 scope.go:117] "RemoveContainer" containerID="b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.659396 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c\": container with ID starting with b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c not found: ID does not exist" 
containerID="b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.659424 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c"} err="failed to get container status \"b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c\": rpc error: code = NotFound desc = could not find container \"b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c\": container with ID starting with b7a0f68d52a0d0735803a5d116f581d8c9743fc2f08bb6fc0f995de5b341f02c not found: ID does not exist" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.659440 4889 scope.go:117] "RemoveContainer" containerID="85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.660307 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51\": container with ID starting with 85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51 not found: ID does not exist" containerID="85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.660367 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51"} err="failed to get container status \"85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51\": rpc error: code = NotFound desc = could not find container \"85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51\": container with ID starting with 85f3e8320fb1335447ffe14356f12f8607a3d1ecd538ee939d5884bb2c683f51 not found: ID does not exist" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.660385 4889 scope.go:117] "RemoveContainer" containerID="ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.660454 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnfbf\" (UniqueName: \"kubernetes.io/projected/05b0ced3-9aef-4c0e-bae5-44573e094a49-kube-api-access-fnfbf\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.660472 4889 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.660486 4889 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.660498 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.660510 4889 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b0ced3-9aef-4c0e-bae5-44573e094a49-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.661119 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc\": container with ID starting with ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc not found: ID does not exist" containerID="ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.661146 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc"} err="failed to get container status \"ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc\": rpc error: code = NotFound desc = could not find container \"ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc\": container with ID starting with ed9da08b575b092ec0a216ac4e5caabdc1acf300c0b654b7bc09c8289a2004cc not found: ID does not exist" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.676016 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05b0ced3-9aef-4c0e-bae5-44573e094a49" (UID: "05b0ced3-9aef-4c0e-bae5-44573e094a49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.701941 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-config-data" (OuterVolumeSpecName: "config-data") pod "05b0ced3-9aef-4c0e-bae5-44573e094a49" (UID: "05b0ced3-9aef-4c0e-bae5-44573e094a49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.762069 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.762116 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0ced3-9aef-4c0e-bae5-44573e094a49-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.871845 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.906106 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.932547 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.933182 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf9f8ba-742e-483a-bbd9-9474dc0bb17e" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.933272 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf9f8ba-742e-483a-bbd9-9474dc0bb17e" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.933357 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" containerName="neutron-api" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.933429 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" 
containerName="neutron-api" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.933537 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f66c57-362a-437d-a4b4-e2c3bf045890" containerName="mariadb-account-create-update" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.933608 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f66c57-362a-437d-a4b4-e2c3bf045890" containerName="mariadb-account-create-update" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.933685 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="sg-core" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.933788 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="sg-core" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.933865 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fa70ad-b991-42d2-8d9c-53b9d19e6045" containerName="mariadb-account-create-update" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.933932 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fa70ad-b991-42d2-8d9c-53b9d19e6045" containerName="mariadb-account-create-update" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.934023 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e88a1b-3b19-46b8-b880-7b342640a8f2" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.934094 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e88a1b-3b19-46b8-b880-7b342640a8f2" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.934168 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" containerName="neutron-httpd" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.934220 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" containerName="neutron-httpd" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.934281 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="ceilometer-notification-agent" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.934329 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="ceilometer-notification-agent" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.934392 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="ceilometer-central-agent" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.934442 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="ceilometer-central-agent" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.934492 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785a729e-1203-4d90-9bc1-447968cd6ffa" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.934545 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="785a729e-1203-4d90-9bc1-447968cd6ffa" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.934596 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8952181-2de9-4a32-8c71-49e044d03333" containerName="mariadb-account-create-update" Nov 28 07:09:03 
crc kubenswrapper[4889]: I1128 07:09:03.934652 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8952181-2de9-4a32-8c71-49e044d03333" containerName="mariadb-account-create-update" Nov 28 07:09:03 crc kubenswrapper[4889]: E1128 07:09:03.934749 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="proxy-httpd" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.934841 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="proxy-httpd" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.935504 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fa70ad-b991-42d2-8d9c-53b9d19e6045" containerName="mariadb-account-create-update" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.935583 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="proxy-httpd" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.935644 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f66c57-362a-437d-a4b4-e2c3bf045890" containerName="mariadb-account-create-update" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.935699 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" containerName="neutron-httpd" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.935791 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="ceilometer-notification-agent" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.936128 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdfb982-66e1-4791-b46a-e6c12765560d" containerName="neutron-api" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.936214 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="ceilometer-central-agent" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.936289 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8952181-2de9-4a32-8c71-49e044d03333" containerName="mariadb-account-create-update" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.936373 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="785a729e-1203-4d90-9bc1-447968cd6ffa" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.936449 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf9f8ba-742e-483a-bbd9-9474dc0bb17e" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.936546 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e88a1b-3b19-46b8-b880-7b342640a8f2" containerName="mariadb-database-create" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.936639 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" containerName="sg-core" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.938786 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
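
The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs a few entries up are noisy but benign: the kubelet is re-driving removal of containers CRI-O has already purged, and rpc error: code = NotFound turns each retry into a no-op instead of a failure. That is the standard idempotent-delete pattern against a gRPC API; a sketch of it below, where runtimeService and fakeRuntime are hypothetical stand-ins for the CRI client rather than kubelet's real code path (needs the google.golang.org/grpc module):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // runtimeService is a hypothetical stand-in for the CRI runtime client;
    // only the single call this sketch needs.
    type runtimeService interface {
        RemoveContainer(id string) error
    }

    // removeIfPresent treats NotFound as success: a container that is already
    // gone satisfies "remove", which is why the retries logged above are harmless.
    func removeIfPresent(rt runtimeService, id string) error {
        err := rt.RemoveContainer(id)
        if err == nil || status.Code(err) == codes.NotFound {
            return nil
        }
        return fmt.Errorf("removing container %s: %w", id, err)
    }

    // fakeRuntime simulates the runtime after its garbage collector has won the race.
    type fakeRuntime struct{}

    func (fakeRuntime) RemoveContainer(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func main() {
        id := "983ea7bac24d4be22be8606dc7fa307b217c69595dbed04d2724d934ea28600e"
        fmt.Println(removeIfPresent(fakeRuntime{}, id)) // <nil>: already-gone counts as removed
    }

Any other status code (permission errors, runtime timeouts) still surfaces, so genuine failures are not masked by the shortcut.
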
Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.942064 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.942064 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:09:03 crc kubenswrapper[4889]: I1128 07:09:03.942911 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.070563 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-config-data\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.070818 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.070873 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.070895 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.071018 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrxz\" (UniqueName: \"kubernetes.io/projected/04836c41-1c6c-498e-a820-4f87f0157aa7-kube-api-access-5vrxz\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.071093 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.071157 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-scripts\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.173303 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 
07:09:04.173371 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-scripts\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.173427 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-config-data\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.173468 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.173491 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.173513 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.173583 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vrxz\" (UniqueName: \"kubernetes.io/projected/04836c41-1c6c-498e-a820-4f87f0157aa7-kube-api-access-5vrxz\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.174734 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.175386 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.178592 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-scripts\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.178621 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.178779 4889 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.189137 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-config-data\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.192539 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vrxz\" (UniqueName: \"kubernetes.io/projected/04836c41-1c6c-498e-a820-4f87f0157aa7-kube-api-access-5vrxz\") pod \"ceilometer-0\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.258620 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:04 crc kubenswrapper[4889]: I1128 07:09:04.728366 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:04 crc kubenswrapper[4889]: W1128 07:09:04.736987 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04836c41_1c6c_498e_a820_4f87f0157aa7.slice/crio-0e39fe79fb9585a8b83c31cc0c65c9dd5f1a9a2258236d044582fd2da7297746 WatchSource:0}: Error finding container 0e39fe79fb9585a8b83c31cc0c65c9dd5f1a9a2258236d044582fd2da7297746: Status 404 returned error can't find the container with id 0e39fe79fb9585a8b83c31cc0c65c9dd5f1a9a2258236d044582fd2da7297746 Nov 28 07:09:05 crc kubenswrapper[4889]: I1128 07:09:05.345506 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b0ced3-9aef-4c0e-bae5-44573e094a49" path="/var/lib/kubelet/pods/05b0ced3-9aef-4c0e-bae5-44573e094a49/volumes" Nov 28 07:09:05 crc kubenswrapper[4889]: I1128 07:09:05.555757 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerStarted","Data":"0e39fe79fb9585a8b83c31cc0c65c9dd5f1a9a2258236d044582fd2da7297746"} Nov 28 07:09:07 crc kubenswrapper[4889]: I1128 07:09:07.300370 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.084357 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.091170 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.282645 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd5js"] Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.284033 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.289179 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.289363 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mbg85" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.289463 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.296652 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd5js"] Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.377535 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-scripts\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.378075 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.378107 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.378135 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqmt\" (UniqueName: \"kubernetes.io/projected/a31734ad-17b6-497c-a83b-3e960ff9291c-kube-api-access-bbqmt\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.479952 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.479995 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.480027 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqmt\" (UniqueName: \"kubernetes.io/projected/a31734ad-17b6-497c-a83b-3e960ff9291c-kube-api-access-bbqmt\") pod \"nova-cell0-conductor-db-sync-sd5js\" 
(UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.480083 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-scripts\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.485378 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-scripts\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.494413 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.499558 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.500003 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqmt\" (UniqueName: \"kubernetes.io/projected/a31734ad-17b6-497c-a83b-3e960ff9291c-kube-api-access-bbqmt\") pod \"nova-cell0-conductor-db-sync-sd5js\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:09 crc kubenswrapper[4889]: I1128 07:09:09.606461 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:11 crc kubenswrapper[4889]: I1128 07:09:11.444214 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd5js"] Nov 28 07:09:11 crc kubenswrapper[4889]: I1128 07:09:11.614062 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerStarted","Data":"bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35"} Nov 28 07:09:11 crc kubenswrapper[4889]: I1128 07:09:11.614973 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sd5js" event={"ID":"a31734ad-17b6-497c-a83b-3e960ff9291c","Type":"ContainerStarted","Data":"24517b8eabd90dc6b806690435e1fd01e424055eb5ca9a84e50efb6a9c53ed6e"} Nov 28 07:09:11 crc kubenswrapper[4889]: I1128 07:09:11.616108 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9a763079-28f4-4dd4-8ad8-96bc23a29fb8","Type":"ContainerStarted","Data":"3c5f80a48b3ca25b6b7f1a49e74a9bac715cb48a9bb541e28b85cc92405a168e"} Nov 28 07:09:11 crc kubenswrapper[4889]: I1128 07:09:11.635488 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.815363782 podStartE2EDuration="13.635470313s" podCreationTimestamp="2025-11-28 07:08:58 +0000 UTC" firstStartedPulling="2025-11-28 07:09:00.170639248 +0000 UTC m=+1263.140873403" lastFinishedPulling="2025-11-28 07:09:10.990745779 +0000 UTC m=+1273.960979934" observedRunningTime="2025-11-28 07:09:11.630876203 +0000 UTC m=+1274.601110368" watchObservedRunningTime="2025-11-28 07:09:11.635470313 +0000 UTC m=+1274.605704468" Nov 28 07:09:13 crc kubenswrapper[4889]: I1128 07:09:13.648769 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerStarted","Data":"ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf"} Nov 28 07:09:14 crc kubenswrapper[4889]: I1128 07:09:14.668457 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerStarted","Data":"ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5"} Nov 28 07:09:20 crc kubenswrapper[4889]: I1128 07:09:20.562722 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:09:20 crc kubenswrapper[4889]: I1128 07:09:20.565396 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerName="glance-log" containerID="cri-o://e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9" gracePeriod=30 Nov 28 07:09:20 crc kubenswrapper[4889]: I1128 07:09:20.565971 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerName="glance-httpd" containerID="cri-o://8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0" gracePeriod=30 Nov 28 07:09:20 crc kubenswrapper[4889]: I1128 07:09:20.742784 4889 generic.go:334] "Generic (PLEG): container finished" podID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerID="e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9" exitCode=143 Nov 28 07:09:20 crc 
kubenswrapper[4889]: I1128 07:09:20.742899 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab27c833-5fb3-45dd-8bea-5abf637db41a","Type":"ContainerDied","Data":"e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9"} Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.753662 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerStarted","Data":"fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea"} Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.753907 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="sg-core" containerID="cri-o://ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5" gracePeriod=30 Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.753968 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="ceilometer-notification-agent" containerID="cri-o://ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf" gracePeriod=30 Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.754045 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.753866 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="proxy-httpd" containerID="cri-o://fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea" gracePeriod=30 Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.753845 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="ceilometer-central-agent" containerID="cri-o://bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35" gracePeriod=30 Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.756523 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sd5js" event={"ID":"a31734ad-17b6-497c-a83b-3e960ff9291c","Type":"ContainerStarted","Data":"626370a08a887e0cf879525f92156e1e49617b4c81dc57238c397d3e1a16d956"} Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.781771 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.912698058 podStartE2EDuration="18.78174943s" podCreationTimestamp="2025-11-28 07:09:03 +0000 UTC" firstStartedPulling="2025-11-28 07:09:04.740474839 +0000 UTC m=+1267.710708994" lastFinishedPulling="2025-11-28 07:09:20.609526211 +0000 UTC m=+1283.579760366" observedRunningTime="2025-11-28 07:09:21.77591835 +0000 UTC m=+1284.746152505" watchObservedRunningTime="2025-11-28 07:09:21.78174943 +0000 UTC m=+1284.751983585" Nov 28 07:09:21 crc kubenswrapper[4889]: I1128 07:09:21.792127 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-sd5js" podStartSLOduration=3.634015763 podStartE2EDuration="12.79211115s" podCreationTimestamp="2025-11-28 07:09:09 +0000 UTC" firstStartedPulling="2025-11-28 07:09:11.45001477 +0000 UTC m=+1274.420248925" lastFinishedPulling="2025-11-28 07:09:20.608110157 +0000 UTC m=+1283.578344312" observedRunningTime="2025-11-28 07:09:21.790569382 +0000 UTC m=+1284.760803547" watchObservedRunningTime="2025-11-28 07:09:21.79211115 +0000 UTC m=+1284.762345305"
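
The two "Observed pod startup duration" entries just above carry enough fields to reconstruct their own numbers: watchObservedRunningTime minus podCreationTimestamp gives podStartE2EDuration, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) leaves podStartSLOduration. For ceilometer-0 that is 07:09:21.78174943 minus 07:09:03 = 18.78174943s end to end, and 18.78174943s minus 15.869051372s of pulling = 2.912698058s, exactly the logged values. A small Go check with the timestamps copied from the entry above, monotonic m=+... suffixes dropped; the field relationship is inferred from this arithmetic, not taken from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching the timestamps as they appear in the log entries above.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Values copied verbatim from the ceilometer-0 startup-latency entry.
        created := mustParse("2025-11-28 07:09:03 +0000 UTC")
        firstPull := mustParse("2025-11-28 07:09:04.740474839 +0000 UTC")
        lastPull := mustParse("2025-11-28 07:09:20.609526211 +0000 UTC")
        running := mustParse("2025-11-28 07:09:21.78174943 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration = E2E minus pull time
        fmt.Println(e2e, slo)                // 18.78174943s 2.912698058s
    }

The same arithmetic reproduces the openstackclient entry further up: 13.635470313s end to end, and 13.635470313s minus the 10.820106531s pull window = 2.815363782s SLO.
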
Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.004676 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.005402 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerName="glance-log" containerID="cri-o://7ffdbe6958c1311ee48cd241ca1d53677f0cc72b4499994d7a16263eff3b6487" gracePeriod=30 Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.005650 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerName="glance-httpd" containerID="cri-o://85b12a258d7e9a073bbd73f28c81297df92852331940f5dbfda2d3b0a81b900f" gracePeriod=30 Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.778431 4889 generic.go:334] "Generic (PLEG): container finished" podID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerID="fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea" exitCode=0 Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.778823 4889 generic.go:334] "Generic (PLEG): container finished" podID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerID="ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5" exitCode=2 Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.778482 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerDied","Data":"fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea"} Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.778878 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerDied","Data":"ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5"} Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.778895 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerDied","Data":"bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35"} Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.778840 4889 generic.go:334] "Generic (PLEG): container finished" podID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerID="bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35" exitCode=0 Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.782375 4889 generic.go:334] "Generic (PLEG): container finished" podID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerID="7ffdbe6958c1311ee48cd241ca1d53677f0cc72b4499994d7a16263eff3b6487" exitCode=143 Nov 28 07:09:22 crc kubenswrapper[4889]: I1128 07:09:22.782465 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a","Type":"ContainerDied","Data":"7ffdbe6958c1311ee48cd241ca1d53677f0cc72b4499994d7a16263eff3b6487"} Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.251077 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.379284 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-sg-core-conf-yaml\") pod \"04836c41-1c6c-498e-a820-4f87f0157aa7\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.379568 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-scripts\") pod \"04836c41-1c6c-498e-a820-4f87f0157aa7\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.379592 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-log-httpd\") pod \"04836c41-1c6c-498e-a820-4f87f0157aa7\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.379633 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vrxz\" (UniqueName: \"kubernetes.io/projected/04836c41-1c6c-498e-a820-4f87f0157aa7-kube-api-access-5vrxz\") pod \"04836c41-1c6c-498e-a820-4f87f0157aa7\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.379663 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-combined-ca-bundle\") pod \"04836c41-1c6c-498e-a820-4f87f0157aa7\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.379694 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-run-httpd\") pod \"04836c41-1c6c-498e-a820-4f87f0157aa7\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.379727 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-config-data\") pod \"04836c41-1c6c-498e-a820-4f87f0157aa7\" (UID: \"04836c41-1c6c-498e-a820-4f87f0157aa7\") " Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.380879 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04836c41-1c6c-498e-a820-4f87f0157aa7" (UID: "04836c41-1c6c-498e-a820-4f87f0157aa7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.381005 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04836c41-1c6c-498e-a820-4f87f0157aa7" (UID: "04836c41-1c6c-498e-a820-4f87f0157aa7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.385029 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04836c41-1c6c-498e-a820-4f87f0157aa7-kube-api-access-5vrxz" (OuterVolumeSpecName: "kube-api-access-5vrxz") pod "04836c41-1c6c-498e-a820-4f87f0157aa7" (UID: "04836c41-1c6c-498e-a820-4f87f0157aa7"). InnerVolumeSpecName "kube-api-access-5vrxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.386767 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-scripts" (OuterVolumeSpecName: "scripts") pod "04836c41-1c6c-498e-a820-4f87f0157aa7" (UID: "04836c41-1c6c-498e-a820-4f87f0157aa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.411874 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04836c41-1c6c-498e-a820-4f87f0157aa7" (UID: "04836c41-1c6c-498e-a820-4f87f0157aa7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.481301 4889 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.481334 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.481344 4889 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.481352 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vrxz\" (UniqueName: \"kubernetes.io/projected/04836c41-1c6c-498e-a820-4f87f0157aa7-kube-api-access-5vrxz\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.481363 4889 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04836c41-1c6c-498e-a820-4f87f0157aa7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.487032 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04836c41-1c6c-498e-a820-4f87f0157aa7" (UID: "04836c41-1c6c-498e-a820-4f87f0157aa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.492361 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-config-data" (OuterVolumeSpecName: "config-data") pod "04836c41-1c6c-498e-a820-4f87f0157aa7" (UID: "04836c41-1c6c-498e-a820-4f87f0157aa7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.582665 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.582912 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04836c41-1c6c-498e-a820-4f87f0157aa7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.794301 4889 generic.go:334] "Generic (PLEG): container finished" podID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerID="ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf" exitCode=0 Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.794343 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerDied","Data":"ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf"} Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.794371 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04836c41-1c6c-498e-a820-4f87f0157aa7","Type":"ContainerDied","Data":"0e39fe79fb9585a8b83c31cc0c65c9dd5f1a9a2258236d044582fd2da7297746"} Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.794401 4889 scope.go:117] "RemoveContainer" containerID="fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.794565 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.825130 4889 scope.go:117] "RemoveContainer" containerID="ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.855751 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.873830 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.883357 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:23 crc kubenswrapper[4889]: E1128 07:09:23.883979 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="ceilometer-central-agent" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.884001 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="ceilometer-central-agent" Nov 28 07:09:23 crc kubenswrapper[4889]: E1128 07:09:23.884032 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="ceilometer-notification-agent" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.884040 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="ceilometer-notification-agent" Nov 28 07:09:23 crc kubenswrapper[4889]: E1128 07:09:23.884058 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="sg-core" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.884065 4889 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="sg-core" Nov 28 07:09:23 crc kubenswrapper[4889]: E1128 07:09:23.884072 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="proxy-httpd" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.884080 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="proxy-httpd" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.884288 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="proxy-httpd" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.884314 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="sg-core" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.884329 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="ceilometer-notification-agent" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.884341 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" containerName="ceilometer-central-agent" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.891945 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.900999 4889 scope.go:117] "RemoveContainer" containerID="ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.901077 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.901878 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.904089 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:09:23 crc kubenswrapper[4889]: I1128 07:09:23.981858 4889 scope.go:117] "RemoveContainer" containerID="bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.007025 4889 scope.go:117] "RemoveContainer" containerID="fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea" Nov 28 07:09:24 crc kubenswrapper[4889]: E1128 07:09:24.007962 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea\": container with ID starting with fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea not found: ID does not exist" containerID="fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.008015 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea"} err="failed to get container status \"fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea\": rpc error: code = NotFound desc = could not find container \"fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea\": container with ID starting with fa2d0ebc7c198206f89cfb31da66a3cab7142e803df617eae831d856453bc1ea not found: ID does not exist" Nov 28 07:09:24 crc 
kubenswrapper[4889]: I1128 07:09:24.008056 4889 scope.go:117] "RemoveContainer" containerID="ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5" Nov 28 07:09:24 crc kubenswrapper[4889]: E1128 07:09:24.008582 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5\": container with ID starting with ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5 not found: ID does not exist" containerID="ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.008684 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5"} err="failed to get container status \"ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5\": rpc error: code = NotFound desc = could not find container \"ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5\": container with ID starting with ea61c69fca7cfe50d51d68f1aeb07dbe468075ce38b62c38e9af294d050fe8f5 not found: ID does not exist" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.008788 4889 scope.go:117] "RemoveContainer" containerID="ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf" Nov 28 07:09:24 crc kubenswrapper[4889]: E1128 07:09:24.009123 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf\": container with ID starting with ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf not found: ID does not exist" containerID="ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.009170 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf"} err="failed to get container status \"ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf\": rpc error: code = NotFound desc = could not find container \"ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf\": container with ID starting with ef92cc51f22d9a75effb96b37c998245975b73e395374632c915b2c0836b20bf not found: ID does not exist" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.009197 4889 scope.go:117] "RemoveContainer" containerID="bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35" Nov 28 07:09:24 crc kubenswrapper[4889]: E1128 07:09:24.009585 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35\": container with ID starting with bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35 not found: ID does not exist" containerID="bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.009733 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35"} err="failed to get container status \"bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35\": rpc error: code = NotFound desc = could not find container 
\"bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35\": container with ID starting with bd84004343bd5cfe47519de854613a743769c66bfaf31e731aafe24c605ecf35 not found: ID does not exist" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.094038 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.094108 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-run-httpd\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.094409 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-scripts\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.094476 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.094592 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-config-data\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.094618 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-log-httpd\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.094651 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxh8\" (UniqueName: \"kubernetes.io/projected/f9b18624-0a09-4827-b69a-f3831bf83f06-kube-api-access-cvxh8\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.196049 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxh8\" (UniqueName: \"kubernetes.io/projected/f9b18624-0a09-4827-b69a-f3831bf83f06-kube-api-access-cvxh8\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.196167 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 
crc kubenswrapper[4889]: I1128 07:09:24.196191 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-run-httpd\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.196246 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-scripts\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.196265 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.196297 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-config-data\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.196312 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-log-httpd\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.199397 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-run-httpd\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.199415 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-log-httpd\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.201093 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.205492 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-scripts\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.207473 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-config-data\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.214686 4889 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.216023 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxh8\" (UniqueName: \"kubernetes.io/projected/f9b18624-0a09-4827-b69a-f3831bf83f06-kube-api-access-cvxh8\") pod \"ceilometer-0\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.255093 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.351753 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.399598 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ab27c833-5fb3-45dd-8bea-5abf637db41a\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.399957 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-scripts\") pod \"ab27c833-5fb3-45dd-8bea-5abf637db41a\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.400001 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-config-data\") pod \"ab27c833-5fb3-45dd-8bea-5abf637db41a\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.400027 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-combined-ca-bundle\") pod \"ab27c833-5fb3-45dd-8bea-5abf637db41a\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.400055 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-httpd-run\") pod \"ab27c833-5fb3-45dd-8bea-5abf637db41a\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.400089 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt672\" (UniqueName: \"kubernetes.io/projected/ab27c833-5fb3-45dd-8bea-5abf637db41a-kube-api-access-dt672\") pod \"ab27c833-5fb3-45dd-8bea-5abf637db41a\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.400124 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-public-tls-certs\") pod \"ab27c833-5fb3-45dd-8bea-5abf637db41a\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.400167 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-logs\") pod \"ab27c833-5fb3-45dd-8bea-5abf637db41a\" (UID: \"ab27c833-5fb3-45dd-8bea-5abf637db41a\") " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.400776 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-logs" (OuterVolumeSpecName: "logs") pod "ab27c833-5fb3-45dd-8bea-5abf637db41a" (UID: "ab27c833-5fb3-45dd-8bea-5abf637db41a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.401059 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ab27c833-5fb3-45dd-8bea-5abf637db41a" (UID: "ab27c833-5fb3-45dd-8bea-5abf637db41a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.405485 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-scripts" (OuterVolumeSpecName: "scripts") pod "ab27c833-5fb3-45dd-8bea-5abf637db41a" (UID: "ab27c833-5fb3-45dd-8bea-5abf637db41a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.406914 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "ab27c833-5fb3-45dd-8bea-5abf637db41a" (UID: "ab27c833-5fb3-45dd-8bea-5abf637db41a"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.412947 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab27c833-5fb3-45dd-8bea-5abf637db41a-kube-api-access-dt672" (OuterVolumeSpecName: "kube-api-access-dt672") pod "ab27c833-5fb3-45dd-8bea-5abf637db41a" (UID: "ab27c833-5fb3-45dd-8bea-5abf637db41a"). InnerVolumeSpecName "kube-api-access-dt672". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.438150 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab27c833-5fb3-45dd-8bea-5abf637db41a" (UID: "ab27c833-5fb3-45dd-8bea-5abf637db41a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.461533 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-config-data" (OuterVolumeSpecName: "config-data") pod "ab27c833-5fb3-45dd-8bea-5abf637db41a" (UID: "ab27c833-5fb3-45dd-8bea-5abf637db41a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.472163 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ab27c833-5fb3-45dd-8bea-5abf637db41a" (UID: "ab27c833-5fb3-45dd-8bea-5abf637db41a"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.503150 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.503183 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.503218 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.503227 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.503235 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.503244 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab27c833-5fb3-45dd-8bea-5abf637db41a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.503252 4889 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab27c833-5fb3-45dd-8bea-5abf637db41a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.503260 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt672\" (UniqueName: \"kubernetes.io/projected/ab27c833-5fb3-45dd-8bea-5abf637db41a-kube-api-access-dt672\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.535059 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.604400 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.754252 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:24 crc kubenswrapper[4889]: W1128 07:09:24.764316 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b18624_0a09_4827_b69a_f3831bf83f06.slice/crio-fc3411a1f4a53a63a5bd36c164af8d92a746e9f5923ad410dde4647a049c8d49 WatchSource:0}: Error finding container fc3411a1f4a53a63a5bd36c164af8d92a746e9f5923ad410dde4647a049c8d49: Status 404 returned error can't find the container with id fc3411a1f4a53a63a5bd36c164af8d92a746e9f5923ad410dde4647a049c8d49 Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.803131 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerStarted","Data":"fc3411a1f4a53a63a5bd36c164af8d92a746e9f5923ad410dde4647a049c8d49"} Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.806534 4889 generic.go:334] "Generic (PLEG): container finished" podID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerID="8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0" exitCode=0 Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.806634 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab27c833-5fb3-45dd-8bea-5abf637db41a","Type":"ContainerDied","Data":"8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0"} Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.806676 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab27c833-5fb3-45dd-8bea-5abf637db41a","Type":"ContainerDied","Data":"6d2887eb3793ba8f584c237dc914a74c049157a3f4c3e152aaee30643ce2827c"} Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.806635 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.806729 4889 scope.go:117] "RemoveContainer" containerID="8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.847427 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.864427 4889 scope.go:117] "RemoveContainer" containerID="e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.876686 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.883473 4889 scope.go:117] "RemoveContainer" containerID="8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0" Nov 28 07:09:24 crc kubenswrapper[4889]: E1128 07:09:24.883883 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0\": container with ID starting with 8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0 not found: ID does not exist" containerID="8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.883915 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0"} err="failed to get container status \"8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0\": rpc error: code = NotFound desc = could not find container \"8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0\": container with ID starting with 8790ee5cd7c68c864b8ecdced2fd2638613c20fa82ea5e061435dff313b3eaa0 not found: ID does not exist" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.883941 4889 scope.go:117] "RemoveContainer" containerID="e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9" Nov 28 07:09:24 crc kubenswrapper[4889]: E1128 07:09:24.884179 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9\": container with ID starting with e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9 not found: ID does not exist" containerID="e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.884208 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9"} err="failed to get container status \"e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9\": rpc error: code = NotFound desc = could not find container \"e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9\": container with ID starting with e6631b2cd269353873b4ace3c02f7ceec9937a638541b4c903ff017be74e4ea9 not found: ID does not exist" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.886839 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:09:24 crc kubenswrapper[4889]: E1128 07:09:24.887263 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerName="glance-log" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.887285 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerName="glance-log" Nov 28 07:09:24 crc kubenswrapper[4889]: E1128 07:09:24.887304 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerName="glance-httpd" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.887310 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerName="glance-httpd" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.887481 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerName="glance-log" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.887496 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" containerName="glance-httpd" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.888428 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.892459 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.894738 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 07:09:24 crc kubenswrapper[4889]: I1128 07:09:24.905777 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.016714 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.016822 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.016847 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnlcd\" (UniqueName: \"kubernetes.io/projected/30ed215c-b8d0-43fb-85bd-8531e5acf609-kube-api-access-cnlcd\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.016891 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.016942 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.016967 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-logs\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.017011 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.017045 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.118979 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119026 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnlcd\" (UniqueName: \"kubernetes.io/projected/30ed215c-b8d0-43fb-85bd-8531e5acf609-kube-api-access-cnlcd\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119044 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119075 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119125 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119151 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-logs\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119191 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119213 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119971 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.120091 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.119985 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-logs\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.123421 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.123696 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.123733 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.131254 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.153265 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.157525 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnlcd\" (UniqueName: \"kubernetes.io/projected/30ed215c-b8d0-43fb-85bd-8531e5acf609-kube-api-access-cnlcd\") pod \"glance-default-external-api-0\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") " pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.270667 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.343291 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04836c41-1c6c-498e-a820-4f87f0157aa7" path="/var/lib/kubelet/pods/04836c41-1c6c-498e-a820-4f87f0157aa7/volumes" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.344090 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab27c833-5fb3-45dd-8bea-5abf637db41a" path="/var/lib/kubelet/pods/ab27c833-5fb3-45dd-8bea-5abf637db41a/volumes" Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.771108 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:09:25 crc kubenswrapper[4889]: W1128 07:09:25.775241 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30ed215c_b8d0_43fb_85bd_8531e5acf609.slice/crio-f034b372d68ec7a99b3a4374bb66584680c94e8d9453e90d16b2255971e39203 WatchSource:0}: Error finding container f034b372d68ec7a99b3a4374bb66584680c94e8d9453e90d16b2255971e39203: Status 404 returned error can't find the container with id f034b372d68ec7a99b3a4374bb66584680c94e8d9453e90d16b2255971e39203 Nov 28 07:09:25 crc kubenswrapper[4889]: I1128 07:09:25.829413 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ed215c-b8d0-43fb-85bd-8531e5acf609","Type":"ContainerStarted","Data":"f034b372d68ec7a99b3a4374bb66584680c94e8d9453e90d16b2255971e39203"} Nov 28 07:09:26 crc kubenswrapper[4889]: I1128 07:09:26.847179 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerStarted","Data":"ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed"} Nov 28 07:09:26 crc kubenswrapper[4889]: I1128 07:09:26.855922 4889 generic.go:334] "Generic (PLEG): container finished" podID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerID="85b12a258d7e9a073bbd73f28c81297df92852331940f5dbfda2d3b0a81b900f" exitCode=0 Nov 28 07:09:26 crc kubenswrapper[4889]: I1128 07:09:26.856015 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a","Type":"ContainerDied","Data":"85b12a258d7e9a073bbd73f28c81297df92852331940f5dbfda2d3b0a81b900f"} Nov 28 07:09:26 crc kubenswrapper[4889]: I1128 07:09:26.864265 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ed215c-b8d0-43fb-85bd-8531e5acf609","Type":"ContainerStarted","Data":"409c5ef01d2ff33efa004111267e8e87bbe31d48936823d35c2588b49a2b67eb"} Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.126119 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.256418 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-internal-tls-certs\") pod \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.256487 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd6cw\" (UniqueName: \"kubernetes.io/projected/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-kube-api-access-vd6cw\") pod \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.256522 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-config-data\") pod \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.256563 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-combined-ca-bundle\") pod \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.256593 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-httpd-run\") pod \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.256615 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-logs\") pod \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.256660 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.256693 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-scripts\") pod \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\" (UID: \"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a\") " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.259225 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" (UID: "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.259565 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-logs" (OuterVolumeSpecName: "logs") pod "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" (UID: "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.262238 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-scripts" (OuterVolumeSpecName: "scripts") pod "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" (UID: "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.265334 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-kube-api-access-vd6cw" (OuterVolumeSpecName: "kube-api-access-vd6cw") pod "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" (UID: "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a"). InnerVolumeSpecName "kube-api-access-vd6cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.265852 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" (UID: "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.290806 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" (UID: "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.327965 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" (UID: "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.329914 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-config-data" (OuterVolumeSpecName: "config-data") pod "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" (UID: "5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.358567 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.358787 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.358804 4889 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.358813 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.358841 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.358851 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.358860 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.358868 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd6cw\" (UniqueName: \"kubernetes.io/projected/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a-kube-api-access-vd6cw\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.380833 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.460389 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.892674 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerStarted","Data":"b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d"} Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.902115 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a","Type":"ContainerDied","Data":"2c0de69bdd0f1b11807ebfc3ce4ebf809b4136164f78946ad3b0a1e8f273c450"} Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.902187 4889 scope.go:117] "RemoveContainer" containerID="85b12a258d7e9a073bbd73f28c81297df92852331940f5dbfda2d3b0a81b900f" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.902213 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.909368 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ed215c-b8d0-43fb-85bd-8531e5acf609","Type":"ContainerStarted","Data":"49402cf027d11e8e350b29757338f93c7461291da6a3603e125d8fc9821c3652"} Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.940462 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.940445656 podStartE2EDuration="3.940445656s" podCreationTimestamp="2025-11-28 07:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:27.926932841 +0000 UTC m=+1290.897166996" watchObservedRunningTime="2025-11-28 07:09:27.940445656 +0000 UTC m=+1290.910679811" Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.985473 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:09:27 crc kubenswrapper[4889]: I1128 07:09:27.995499 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.011116 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:09:28 crc kubenswrapper[4889]: E1128 07:09:28.011597 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerName="glance-httpd" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.011622 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerName="glance-httpd" Nov 28 07:09:28 crc kubenswrapper[4889]: E1128 07:09:28.011639 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerName="glance-log" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.011648 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerName="glance-log" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.011929 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerName="glance-log" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.011955 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" containerName="glance-httpd" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.013068 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.015210 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.015391 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.031344 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.068064 4889 scope.go:117] "RemoveContainer" containerID="7ffdbe6958c1311ee48cd241ca1d53677f0cc72b4499994d7a16263eff3b6487" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.082072 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.082119 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.082386 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.082427 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.082519 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tt7\" (UniqueName: \"kubernetes.io/projected/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-kube-api-access-p5tt7\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.082554 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.082572 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.082604 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184309 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184380 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184414 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184442 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184525 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tt7\" (UniqueName: \"kubernetes.io/projected/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-kube-api-access-p5tt7\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184556 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184574 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184608 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 
07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.184849 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.185481 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.185718 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.198800 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.201571 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.208447 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.208904 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.214419 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tt7\" (UniqueName: \"kubernetes.io/projected/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-kube-api-access-p5tt7\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.220909 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.335033 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.631466 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:28 crc kubenswrapper[4889]: W1128 07:09:28.884069 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb1e21ee_7d2d_4d55_8a0e_d6235a12f0ae.slice/crio-3b49c28367a3d160df8ad542ed4d61e0381a18416a147f8382550cc5b290a67e WatchSource:0}: Error finding container 3b49c28367a3d160df8ad542ed4d61e0381a18416a147f8382550cc5b290a67e: Status 404 returned error can't find the container with id 3b49c28367a3d160df8ad542ed4d61e0381a18416a147f8382550cc5b290a67e Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.884805 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.918425 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerStarted","Data":"0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24"} Nov 28 07:09:28 crc kubenswrapper[4889]: I1128 07:09:28.922475 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae","Type":"ContainerStarted","Data":"3b49c28367a3d160df8ad542ed4d61e0381a18416a147f8382550cc5b290a67e"} Nov 28 07:09:29 crc kubenswrapper[4889]: I1128 07:09:29.341434 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a" path="/var/lib/kubelet/pods/5f3f1691-9f9f-4d9a-a6c3-bdca58545d2a/volumes" Nov 28 07:09:29 crc kubenswrapper[4889]: I1128 07:09:29.938366 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae","Type":"ContainerStarted","Data":"22318eb16b34523322d3a94ac17704c1b438f84bf7f28f3ecaa09dfd78e54966"} Nov 28 07:09:30 crc kubenswrapper[4889]: I1128 07:09:30.948679 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="ceilometer-central-agent" containerID="cri-o://ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed" gracePeriod=30 Nov 28 07:09:30 crc kubenswrapper[4889]: I1128 07:09:30.948803 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="proxy-httpd" containerID="cri-o://2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784" gracePeriod=30 Nov 28 07:09:30 crc kubenswrapper[4889]: I1128 07:09:30.948839 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="ceilometer-notification-agent" containerID="cri-o://b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d" gracePeriod=30 Nov 28 07:09:30 crc kubenswrapper[4889]: I1128 07:09:30.948895 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="sg-core" containerID="cri-o://0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24" gracePeriod=30 Nov 28 07:09:30 crc kubenswrapper[4889]: I1128 
07:09:30.948972 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerStarted","Data":"2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784"} Nov 28 07:09:30 crc kubenswrapper[4889]: I1128 07:09:30.949523 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:09:30 crc kubenswrapper[4889]: I1128 07:09:30.960595 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae","Type":"ContainerStarted","Data":"cff416d0a45fbb92ec6800489afd9ccbad8dbac624f5bfcda44035e9258fc559"} Nov 28 07:09:30 crc kubenswrapper[4889]: I1128 07:09:30.974583 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.71866496 podStartE2EDuration="7.974565951s" podCreationTimestamp="2025-11-28 07:09:23 +0000 UTC" firstStartedPulling="2025-11-28 07:09:24.76623229 +0000 UTC m=+1287.736466445" lastFinishedPulling="2025-11-28 07:09:30.022133281 +0000 UTC m=+1292.992367436" observedRunningTime="2025-11-28 07:09:30.970924783 +0000 UTC m=+1293.941158948" watchObservedRunningTime="2025-11-28 07:09:30.974565951 +0000 UTC m=+1293.944800106" Nov 28 07:09:31 crc kubenswrapper[4889]: I1128 07:09:31.006410 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.006395727 podStartE2EDuration="4.006395727s" podCreationTimestamp="2025-11-28 07:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:31.003327793 +0000 UTC m=+1293.973561948" watchObservedRunningTime="2025-11-28 07:09:31.006395727 +0000 UTC m=+1293.976629882" Nov 28 07:09:31 crc kubenswrapper[4889]: I1128 07:09:31.970322 4889 generic.go:334] "Generic (PLEG): container finished" podID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerID="2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784" exitCode=0 Nov 28 07:09:31 crc kubenswrapper[4889]: I1128 07:09:31.970580 4889 generic.go:334] "Generic (PLEG): container finished" podID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerID="0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24" exitCode=2 Nov 28 07:09:31 crc kubenswrapper[4889]: I1128 07:09:31.970588 4889 generic.go:334] "Generic (PLEG): container finished" podID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerID="b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d" exitCode=0 Nov 28 07:09:31 crc kubenswrapper[4889]: I1128 07:09:31.970382 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerDied","Data":"2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784"} Nov 28 07:09:31 crc kubenswrapper[4889]: I1128 07:09:31.970643 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerDied","Data":"0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24"} Nov 28 07:09:31 crc kubenswrapper[4889]: I1128 07:09:31.970683 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerDied","Data":"b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d"} Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.589933 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.696785 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-scripts\") pod \"f9b18624-0a09-4827-b69a-f3831bf83f06\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.696956 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-sg-core-conf-yaml\") pod \"f9b18624-0a09-4827-b69a-f3831bf83f06\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.697018 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-log-httpd\") pod \"f9b18624-0a09-4827-b69a-f3831bf83f06\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.697038 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvxh8\" (UniqueName: \"kubernetes.io/projected/f9b18624-0a09-4827-b69a-f3831bf83f06-kube-api-access-cvxh8\") pod \"f9b18624-0a09-4827-b69a-f3831bf83f06\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.697060 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-config-data\") pod \"f9b18624-0a09-4827-b69a-f3831bf83f06\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.697113 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-run-httpd\") pod \"f9b18624-0a09-4827-b69a-f3831bf83f06\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.697142 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-combined-ca-bundle\") pod \"f9b18624-0a09-4827-b69a-f3831bf83f06\" (UID: \"f9b18624-0a09-4827-b69a-f3831bf83f06\") " Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.697500 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9b18624-0a09-4827-b69a-f3831bf83f06" (UID: "f9b18624-0a09-4827-b69a-f3831bf83f06"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.697562 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9b18624-0a09-4827-b69a-f3831bf83f06" (UID: "f9b18624-0a09-4827-b69a-f3831bf83f06"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.702651 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-scripts" (OuterVolumeSpecName: "scripts") pod "f9b18624-0a09-4827-b69a-f3831bf83f06" (UID: "f9b18624-0a09-4827-b69a-f3831bf83f06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.709061 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b18624-0a09-4827-b69a-f3831bf83f06-kube-api-access-cvxh8" (OuterVolumeSpecName: "kube-api-access-cvxh8") pod "f9b18624-0a09-4827-b69a-f3831bf83f06" (UID: "f9b18624-0a09-4827-b69a-f3831bf83f06"). InnerVolumeSpecName "kube-api-access-cvxh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.723776 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9b18624-0a09-4827-b69a-f3831bf83f06" (UID: "f9b18624-0a09-4827-b69a-f3831bf83f06"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.771135 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9b18624-0a09-4827-b69a-f3831bf83f06" (UID: "f9b18624-0a09-4827-b69a-f3831bf83f06"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.799079 4889 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.799117 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvxh8\" (UniqueName: \"kubernetes.io/projected/f9b18624-0a09-4827-b69a-f3831bf83f06-kube-api-access-cvxh8\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.799132 4889 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b18624-0a09-4827-b69a-f3831bf83f06-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.799143 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.799153 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.799163 4889 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.803403 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-config-data" (OuterVolumeSpecName: "config-data") pod "f9b18624-0a09-4827-b69a-f3831bf83f06" (UID: "f9b18624-0a09-4827-b69a-f3831bf83f06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:34 crc kubenswrapper[4889]: I1128 07:09:34.901362 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b18624-0a09-4827-b69a-f3831bf83f06-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.003350 4889 generic.go:334] "Generic (PLEG): container finished" podID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerID="ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed" exitCode=0 Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.003395 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerDied","Data":"ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed"} Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.003418 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.003435 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b18624-0a09-4827-b69a-f3831bf83f06","Type":"ContainerDied","Data":"fc3411a1f4a53a63a5bd36c164af8d92a746e9f5923ad410dde4647a049c8d49"} Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.003454 4889 scope.go:117] "RemoveContainer" containerID="2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.029483 4889 scope.go:117] "RemoveContainer" containerID="0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.042746 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.064613 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.080126 4889 scope.go:117] "RemoveContainer" containerID="b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.099828 4889 scope.go:117] "RemoveContainer" containerID="ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.107986 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:35 crc kubenswrapper[4889]: E1128 07:09:35.108413 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="ceilometer-notification-agent" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.108446 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="ceilometer-notification-agent" Nov 28 07:09:35 crc kubenswrapper[4889]: E1128 07:09:35.108474 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="ceilometer-central-agent" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.108484 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="ceilometer-central-agent" Nov 28 07:09:35 crc kubenswrapper[4889]: E1128 07:09:35.108507 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="proxy-httpd" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.108517 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="proxy-httpd" Nov 28 07:09:35 crc kubenswrapper[4889]: E1128 07:09:35.108544 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="sg-core" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.108573 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="sg-core" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.108796 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="proxy-httpd" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.108826 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="sg-core" Nov 28 07:09:35 crc kubenswrapper[4889]: 
I1128 07:09:35.108846 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="ceilometer-central-agent" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.108872 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" containerName="ceilometer-notification-agent" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.111018 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.113826 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.113841 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.119443 4889 scope.go:117] "RemoveContainer" containerID="2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784" Nov 28 07:09:35 crc kubenswrapper[4889]: E1128 07:09:35.119899 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784\": container with ID starting with 2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784 not found: ID does not exist" containerID="2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.119928 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784"} err="failed to get container status \"2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784\": rpc error: code = NotFound desc = could not find container \"2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784\": container with ID starting with 2ff60cb70be181c44b9833eea4036fdd165f9e32862a53da5c272992b0de8784 not found: ID does not exist" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.119945 4889 scope.go:117] "RemoveContainer" containerID="0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.119943 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:35 crc kubenswrapper[4889]: E1128 07:09:35.120260 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24\": container with ID starting with 0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24 not found: ID does not exist" containerID="0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.120288 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24"} err="failed to get container status \"0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24\": rpc error: code = NotFound desc = could not find container \"0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24\": container with ID starting with 0f198d6530d38f0ac14c581514f9e5fc97dcc7de4c17bf509ea3977627624c24 not found: ID does not exist" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 
07:09:35.120300 4889 scope.go:117] "RemoveContainer" containerID="b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d" Nov 28 07:09:35 crc kubenswrapper[4889]: E1128 07:09:35.120622 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d\": container with ID starting with b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d not found: ID does not exist" containerID="b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.120641 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d"} err="failed to get container status \"b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d\": rpc error: code = NotFound desc = could not find container \"b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d\": container with ID starting with b95843f6676be4d4809c34ffd3502edc17401662e2300876479e42574d14208d not found: ID does not exist" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.120653 4889 scope.go:117] "RemoveContainer" containerID="ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed" Nov 28 07:09:35 crc kubenswrapper[4889]: E1128 07:09:35.120927 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed\": container with ID starting with ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed not found: ID does not exist" containerID="ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.120942 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed"} err="failed to get container status \"ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed\": rpc error: code = NotFound desc = could not find container \"ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed\": container with ID starting with ccaf0976edb9c8ced82c0fe468fbc764df23bf7cc40933db37e02382f12274ed not found: ID does not exist" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.271249 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.271289 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.297833 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.308286 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-scripts\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.308363 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.308406 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-log-httpd\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.308425 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-run-httpd\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.309085 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.309144 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg2xw\" (UniqueName: \"kubernetes.io/projected/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-kube-api-access-cg2xw\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.309253 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-config-data\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.311144 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.343482 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b18624-0a09-4827-b69a-f3831bf83f06" path="/var/lib/kubelet/pods/f9b18624-0a09-4827-b69a-f3831bf83f06/volumes" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.410879 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-log-httpd\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.410937 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-run-httpd\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.410978 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 
07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.411005 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg2xw\" (UniqueName: \"kubernetes.io/projected/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-kube-api-access-cg2xw\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.411122 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-config-data\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.411167 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-scripts\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.411264 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.411456 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-log-httpd\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.411919 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-run-httpd\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.415740 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.416791 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-scripts\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.417302 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.422981 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-config-data\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.431118 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg2xw\" (UniqueName: \"kubernetes.io/projected/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-kube-api-access-cg2xw\") pod \"ceilometer-0\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") " pod="openstack/ceilometer-0" Nov 28 07:09:35 crc kubenswrapper[4889]: I1128 07:09:35.728314 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:09:36 crc kubenswrapper[4889]: I1128 07:09:36.013832 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 07:09:36 crc kubenswrapper[4889]: I1128 07:09:36.013875 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 07:09:36 crc kubenswrapper[4889]: I1128 07:09:36.172723 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:09:37 crc kubenswrapper[4889]: I1128 07:09:37.024088 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerStarted","Data":"842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531"} Nov 28 07:09:37 crc kubenswrapper[4889]: I1128 07:09:37.024410 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerStarted","Data":"763282dec2e397cfe16354e2d6f41c14d40d18307eef9d362a602c5d1f4654f8"} Nov 28 07:09:37 crc kubenswrapper[4889]: I1128 07:09:37.996991 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 07:09:38 crc kubenswrapper[4889]: I1128 07:09:38.000394 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 07:09:38 crc kubenswrapper[4889]: I1128 07:09:38.048146 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerStarted","Data":"efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9"} Nov 28 07:09:38 crc kubenswrapper[4889]: I1128 07:09:38.050001 4889 generic.go:334] "Generic (PLEG): container finished" podID="a31734ad-17b6-497c-a83b-3e960ff9291c" containerID="626370a08a887e0cf879525f92156e1e49617b4c81dc57238c397d3e1a16d956" exitCode=0 Nov 28 07:09:38 crc kubenswrapper[4889]: I1128 07:09:38.050199 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sd5js" event={"ID":"a31734ad-17b6-497c-a83b-3e960ff9291c","Type":"ContainerDied","Data":"626370a08a887e0cf879525f92156e1e49617b4c81dc57238c397d3e1a16d956"} Nov 28 07:09:38 crc kubenswrapper[4889]: I1128 07:09:38.336067 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:38 crc kubenswrapper[4889]: I1128 07:09:38.336112 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:38 crc kubenswrapper[4889]: I1128 07:09:38.375347 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:38 crc kubenswrapper[4889]: I1128 07:09:38.379181 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 
07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.062365 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerStarted","Data":"41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca"} Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.063351 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.063425 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.397774 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.483097 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-combined-ca-bundle\") pod \"a31734ad-17b6-497c-a83b-3e960ff9291c\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.484004 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data\") pod \"a31734ad-17b6-497c-a83b-3e960ff9291c\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.484097 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-scripts\") pod \"a31734ad-17b6-497c-a83b-3e960ff9291c\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.484142 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbqmt\" (UniqueName: \"kubernetes.io/projected/a31734ad-17b6-497c-a83b-3e960ff9291c-kube-api-access-bbqmt\") pod \"a31734ad-17b6-497c-a83b-3e960ff9291c\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.492930 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31734ad-17b6-497c-a83b-3e960ff9291c-kube-api-access-bbqmt" (OuterVolumeSpecName: "kube-api-access-bbqmt") pod "a31734ad-17b6-497c-a83b-3e960ff9291c" (UID: "a31734ad-17b6-497c-a83b-3e960ff9291c"). InnerVolumeSpecName "kube-api-access-bbqmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.500206 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-scripts" (OuterVolumeSpecName: "scripts") pod "a31734ad-17b6-497c-a83b-3e960ff9291c" (UID: "a31734ad-17b6-497c-a83b-3e960ff9291c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:39 crc kubenswrapper[4889]: E1128 07:09:39.513223 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data podName:a31734ad-17b6-497c-a83b-3e960ff9291c nodeName:}" failed. No retries permitted until 2025-11-28 07:09:40.01320269 +0000 UTC m=+1302.983436845 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data") pod "a31734ad-17b6-497c-a83b-3e960ff9291c" (UID: "a31734ad-17b6-497c-a83b-3e960ff9291c") : error deleting /var/lib/kubelet/pods/a31734ad-17b6-497c-a83b-3e960ff9291c/volume-subpaths: remove /var/lib/kubelet/pods/a31734ad-17b6-497c-a83b-3e960ff9291c/volume-subpaths: no such file or directory Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.516162 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a31734ad-17b6-497c-a83b-3e960ff9291c" (UID: "a31734ad-17b6-497c-a83b-3e960ff9291c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.586780 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.586812 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:39 crc kubenswrapper[4889]: I1128 07:09:39.586821 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbqmt\" (UniqueName: \"kubernetes.io/projected/a31734ad-17b6-497c-a83b-3e960ff9291c-kube-api-access-bbqmt\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.074555 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sd5js" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.074627 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sd5js" event={"ID":"a31734ad-17b6-497c-a83b-3e960ff9291c","Type":"ContainerDied","Data":"24517b8eabd90dc6b806690435e1fd01e424055eb5ca9a84e50efb6a9c53ed6e"} Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.075022 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24517b8eabd90dc6b806690435e1fd01e424055eb5ca9a84e50efb6a9c53ed6e" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.094592 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data\") pod \"a31734ad-17b6-497c-a83b-3e960ff9291c\" (UID: \"a31734ad-17b6-497c-a83b-3e960ff9291c\") " Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.101257 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data" (OuterVolumeSpecName: "config-data") pod "a31734ad-17b6-497c-a83b-3e960ff9291c" (UID: "a31734ad-17b6-497c-a83b-3e960ff9291c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.187573 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:09:40 crc kubenswrapper[4889]: E1128 07:09:40.188086 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31734ad-17b6-497c-a83b-3e960ff9291c" containerName="nova-cell0-conductor-db-sync" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.188107 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31734ad-17b6-497c-a83b-3e960ff9291c" containerName="nova-cell0-conductor-db-sync" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.188313 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31734ad-17b6-497c-a83b-3e960ff9291c" containerName="nova-cell0-conductor-db-sync" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.189123 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.196878 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31734ad-17b6-497c-a83b-3e960ff9291c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.211434 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.298428 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.298500 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpvz7\" (UniqueName: \"kubernetes.io/projected/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-kube-api-access-bpvz7\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.298665 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.400818 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.400930 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpvz7\" (UniqueName: \"kubernetes.io/projected/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-kube-api-access-bpvz7\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.401068 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.407783 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.415444 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.429015 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpvz7\" (UniqueName: \"kubernetes.io/projected/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-kube-api-access-bpvz7\") pod \"nova-cell0-conductor-0\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:40 crc kubenswrapper[4889]: I1128 07:09:40.560125 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:41 crc kubenswrapper[4889]: I1128 07:09:41.057673 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:09:41 crc kubenswrapper[4889]: W1128 07:09:41.059485 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ca42308_451d_48e1_a74f_2c7ce6c6a53a.slice/crio-ceb2590fecbc9729a9dc8fe3d73ce534db21c586e40793159e8f616d4103580d WatchSource:0}: Error finding container ceb2590fecbc9729a9dc8fe3d73ce534db21c586e40793159e8f616d4103580d: Status 404 returned error can't find the container with id ceb2590fecbc9729a9dc8fe3d73ce534db21c586e40793159e8f616d4103580d Nov 28 07:09:41 crc kubenswrapper[4889]: I1128 07:09:41.085102 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerStarted","Data":"2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20"} Nov 28 07:09:41 crc kubenswrapper[4889]: I1128 07:09:41.085294 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:09:41 crc kubenswrapper[4889]: I1128 07:09:41.086578 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0ca42308-451d-48e1-a74f-2c7ce6c6a53a","Type":"ContainerStarted","Data":"ceb2590fecbc9729a9dc8fe3d73ce534db21c586e40793159e8f616d4103580d"} Nov 28 07:09:41 crc kubenswrapper[4889]: I1128 07:09:41.121509 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.319610692 podStartE2EDuration="6.121493432s" podCreationTimestamp="2025-11-28 07:09:35 +0000 UTC" firstStartedPulling="2025-11-28 07:09:36.178556503 +0000 UTC m=+1299.148790658" lastFinishedPulling="2025-11-28 07:09:39.980439243 +0000 UTC m=+1302.950673398" observedRunningTime="2025-11-28 07:09:41.115894937 +0000 UTC m=+1304.086129092" 
watchObservedRunningTime="2025-11-28 07:09:41.121493432 +0000 UTC m=+1304.091727587" Nov 28 07:09:41 crc kubenswrapper[4889]: I1128 07:09:41.142260 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:41 crc kubenswrapper[4889]: I1128 07:09:41.142433 4889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 07:09:41 crc kubenswrapper[4889]: I1128 07:09:41.143298 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 07:09:42 crc kubenswrapper[4889]: I1128 07:09:42.096914 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0ca42308-451d-48e1-a74f-2c7ce6c6a53a","Type":"ContainerStarted","Data":"0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10"} Nov 28 07:09:42 crc kubenswrapper[4889]: I1128 07:09:42.123512 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.123493845 podStartE2EDuration="2.123493845s" podCreationTimestamp="2025-11-28 07:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:42.121458336 +0000 UTC m=+1305.091692501" watchObservedRunningTime="2025-11-28 07:09:42.123493845 +0000 UTC m=+1305.093728000" Nov 28 07:09:43 crc kubenswrapper[4889]: I1128 07:09:43.105416 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:50 crc kubenswrapper[4889]: I1128 07:09:50.587558 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.017780 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pnfxg"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.018962 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.035984 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.036220 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.056768 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pnfxg"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.099726 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-config-data\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.099794 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-scripts\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.099813 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.099853 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqb7\" (UniqueName: \"kubernetes.io/projected/c93fa965-a510-4c13-b946-51150bd493e1-kube-api-access-knqb7\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.202077 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-config-data\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.202160 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-scripts\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.202184 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.202223 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knqb7\" (UniqueName: 
\"kubernetes.io/projected/c93fa965-a510-4c13-b946-51150bd493e1-kube-api-access-knqb7\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.214438 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-scripts\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.214959 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.216059 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.229285 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.236769 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.253027 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-config-data\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.265967 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.280300 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knqb7\" (UniqueName: \"kubernetes.io/projected/c93fa965-a510-4c13-b946-51150bd493e1-kube-api-access-knqb7\") pod \"nova-cell0-cell-mapping-pnfxg\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.305933 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7zhz\" (UniqueName: \"kubernetes.io/projected/b374792e-0c42-49c4-90ee-cdb3a872ecae-kube-api-access-m7zhz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.305992 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.306014 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.340559 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.415168 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7zhz\" (UniqueName: \"kubernetes.io/projected/b374792e-0c42-49c4-90ee-cdb3a872ecae-kube-api-access-m7zhz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.415223 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.415246 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.431180 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.432506 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.433765 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.434212 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.438981 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.439322 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.439414 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.445509 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.446146 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.466559 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.503866 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7zhz\" (UniqueName: \"kubernetes.io/projected/b374792e-0c42-49c4-90ee-cdb3a872ecae-kube-api-access-m7zhz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.518651 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-config-data\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.518770 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.518811 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt22w\" (UniqueName: \"kubernetes.io/projected/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-kube-api-access-lt22w\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.518837 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zmw7\" (UniqueName: \"kubernetes.io/projected/75bac447-8979-45d5-a2fd-e22d83e1b001-kube-api-access-7zmw7\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.518852 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-logs\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.518902 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-config-data\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.518939 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bac447-8979-45d5-a2fd-e22d83e1b001-logs\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.518958 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.610623 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.612676 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.622682 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.622793 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt22w\" (UniqueName: \"kubernetes.io/projected/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-kube-api-access-lt22w\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.622834 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-logs\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.622854 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zmw7\" (UniqueName: \"kubernetes.io/projected/75bac447-8979-45d5-a2fd-e22d83e1b001-kube-api-access-7zmw7\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.622987 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-config-data\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.623056 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/75bac447-8979-45d5-a2fd-e22d83e1b001-logs\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.623093 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.623144 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-config-data\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.630997 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bac447-8979-45d5-a2fd-e22d83e1b001-logs\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.637629 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-config-data\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.638442 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.641646 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-config-data\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.641841 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.644626 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-logs\") pod \"nova-metadata-0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.654536 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.655326 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.656116 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt22w\" (UniqueName: \"kubernetes.io/projected/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-kube-api-access-lt22w\") pod \"nova-metadata-0\" (UID: 
\"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.660423 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zmw7\" (UniqueName: \"kubernetes.io/projected/75bac447-8979-45d5-a2fd-e22d83e1b001-kube-api-access-7zmw7\") pod \"nova-api-0\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.678298 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.679566 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.714889 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5594d9b959-mlsc9"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.717187 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.726688 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5594d9b959-mlsc9"] Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.726995 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.727028 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-config-data\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.727048 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9z9c\" (UniqueName: \"kubernetes.io/projected/0dffae5b-ddfc-4c8a-81d5-832bf584f779-kube-api-access-k9z9c\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828141 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-config\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828206 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-nb\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828236 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-svc\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" 
(UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828320 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828347 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-config-data\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828369 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9z9c\" (UniqueName: \"kubernetes.io/projected/0dffae5b-ddfc-4c8a-81d5-832bf584f779-kube-api-access-k9z9c\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828400 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-sb\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828439 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-swift-storage-0\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.828457 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdbt\" (UniqueName: \"kubernetes.io/projected/a16d6068-941b-4d5f-a74c-42e363182095-kube-api-access-ljdbt\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.834054 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-config-data\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.846405 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.850160 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9z9c\" (UniqueName: \"kubernetes.io/projected/0dffae5b-ddfc-4c8a-81d5-832bf584f779-kube-api-access-k9z9c\") pod \"nova-scheduler-0\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " pod="openstack/nova-scheduler-0" Nov 28 07:09:51 crc 
kubenswrapper[4889]: I1128 07:09:51.930088 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-sb\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.930169 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-swift-storage-0\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.930193 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljdbt\" (UniqueName: \"kubernetes.io/projected/a16d6068-941b-4d5f-a74c-42e363182095-kube-api-access-ljdbt\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.930370 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-config\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.930426 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-nb\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.930456 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-svc\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.931487 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-swift-storage-0\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.931506 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-nb\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.931487 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-sb\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.931561 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-config\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.932481 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-svc\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.942908 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.947162 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljdbt\" (UniqueName: \"kubernetes.io/projected/a16d6068-941b-4d5f-a74c-42e363182095-kube-api-access-ljdbt\") pod \"dnsmasq-dns-5594d9b959-mlsc9\" (UID: \"a16d6068-941b-4d5f-a74c-42e363182095\") " pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:51 crc kubenswrapper[4889]: I1128 07:09:51.978976 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.054776 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.174676 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pnfxg"] Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.192391 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rkslq"] Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.193733 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.196088 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.196354 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 07:09:52 crc kubenswrapper[4889]: W1128 07:09:52.199923 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc93fa965_a510_4c13_b946_51150bd493e1.slice/crio-f38336abec4fb1c6f0635db7883b9d90129866161d63755f0314a2af58139018 WatchSource:0}: Error finding container f38336abec4fb1c6f0635db7883b9d90129866161d63755f0314a2af58139018: Status 404 returned error can't find the container with id f38336abec4fb1c6f0635db7883b9d90129866161d63755f0314a2af58139018 Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.231249 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rkslq"] Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.238890 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.238944 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl4xw\" (UniqueName: \"kubernetes.io/projected/da834219-0eb6-44f4-9e57-81a4ef2c201c-kube-api-access-jl4xw\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.239240 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-config-data\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.239301 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-scripts\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.297934 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.310781 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.341401 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-config-data\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: 
I1128 07:09:52.341456 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-scripts\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.341575 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.341613 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl4xw\" (UniqueName: \"kubernetes.io/projected/da834219-0eb6-44f4-9e57-81a4ef2c201c-kube-api-access-jl4xw\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.350003 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-scripts\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.353446 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.370549 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl4xw\" (UniqueName: \"kubernetes.io/projected/da834219-0eb6-44f4-9e57-81a4ef2c201c-kube-api-access-jl4xw\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.385132 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-config-data\") pod \"nova-cell1-conductor-db-sync-rkslq\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.455429 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.547281 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.634010 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:09:52 crc kubenswrapper[4889]: W1128 07:09:52.709650 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda16d6068_941b_4d5f_a74c_42e363182095.slice/crio-d50d03cd0ef1d97a5d2a01f7238a5febfecac201dace7765fbd6a1518cf87ddf WatchSource:0}: Error finding container d50d03cd0ef1d97a5d2a01f7238a5febfecac201dace7765fbd6a1518cf87ddf: Status 404 returned error can't find the container with id d50d03cd0ef1d97a5d2a01f7238a5febfecac201dace7765fbd6a1518cf87ddf Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.722202 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5594d9b959-mlsc9"] Nov 28 07:09:52 crc kubenswrapper[4889]: I1128 07:09:52.927029 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rkslq"] Nov 28 07:09:53 crc kubenswrapper[4889]: I1128 07:09:53.211232 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75bac447-8979-45d5-a2fd-e22d83e1b001","Type":"ContainerStarted","Data":"fe30a95beb2bdad42f4d5724ea24a45ebbf30460b460032c2ff94362f5479347"} Nov 28 07:09:53 crc kubenswrapper[4889]: I1128 07:09:53.212838 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rkslq" event={"ID":"da834219-0eb6-44f4-9e57-81a4ef2c201c","Type":"ContainerStarted","Data":"b2306de6732c7a22cdf3f503795e3846a32f7b4c3ee5a991e166c3b8514dbf7f"} Nov 28 07:09:53 crc kubenswrapper[4889]: I1128 07:09:53.213953 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0","Type":"ContainerStarted","Data":"a6096ab344644ccbdf7be241559f29c041eca24699e28fbeb0bbb8dbc749f130"} Nov 28 07:09:53 crc kubenswrapper[4889]: I1128 07:09:53.215210 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pnfxg" event={"ID":"c93fa965-a510-4c13-b946-51150bd493e1","Type":"ContainerStarted","Data":"f38336abec4fb1c6f0635db7883b9d90129866161d63755f0314a2af58139018"} Nov 28 07:09:53 crc kubenswrapper[4889]: I1128 07:09:53.216210 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" event={"ID":"a16d6068-941b-4d5f-a74c-42e363182095","Type":"ContainerStarted","Data":"d50d03cd0ef1d97a5d2a01f7238a5febfecac201dace7765fbd6a1518cf87ddf"} Nov 28 07:09:53 crc kubenswrapper[4889]: I1128 07:09:53.217273 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dffae5b-ddfc-4c8a-81d5-832bf584f779","Type":"ContainerStarted","Data":"71de98606b0e3ba9d559ce51c8bc35d971af9da1a5c3b109580660077663b646"} Nov 28 07:09:53 crc kubenswrapper[4889]: I1128 07:09:53.218737 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b374792e-0c42-49c4-90ee-cdb3a872ecae","Type":"ContainerStarted","Data":"d83644cf1c47a6d2d652a09d1975555ea4a9c41c414d13d51cf6dd1081e3d851"} Nov 28 07:09:54 crc kubenswrapper[4889]: I1128 07:09:54.229222 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-rkslq" event={"ID":"da834219-0eb6-44f4-9e57-81a4ef2c201c","Type":"ContainerStarted","Data":"e5eff60d8d8d77100dc3390741635f318a7f480049c275a81964aa7dfa36c631"} Nov 28 07:09:54 crc kubenswrapper[4889]: I1128 07:09:54.233801 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pnfxg" event={"ID":"c93fa965-a510-4c13-b946-51150bd493e1","Type":"ContainerStarted","Data":"d8b7dcde5ba3efb58f541995114142c3cd0d5e253a1a00051e1b96d3c29ecbaa"} Nov 28 07:09:54 crc kubenswrapper[4889]: I1128 07:09:54.238454 4889 generic.go:334] "Generic (PLEG): container finished" podID="a16d6068-941b-4d5f-a74c-42e363182095" containerID="3dc327a27cf51a0ade520631c8b7fc4d421557f1c38ffb141e797fc692974432" exitCode=0 Nov 28 07:09:54 crc kubenswrapper[4889]: I1128 07:09:54.238503 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" event={"ID":"a16d6068-941b-4d5f-a74c-42e363182095","Type":"ContainerDied","Data":"3dc327a27cf51a0ade520631c8b7fc4d421557f1c38ffb141e797fc692974432"} Nov 28 07:09:54 crc kubenswrapper[4889]: I1128 07:09:54.252933 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rkslq" podStartSLOduration=2.252912044 podStartE2EDuration="2.252912044s" podCreationTimestamp="2025-11-28 07:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:54.243507548 +0000 UTC m=+1317.213741703" watchObservedRunningTime="2025-11-28 07:09:54.252912044 +0000 UTC m=+1317.223146199" Nov 28 07:09:54 crc kubenswrapper[4889]: I1128 07:09:54.273308 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pnfxg" podStartSLOduration=4.273257754 podStartE2EDuration="4.273257754s" podCreationTimestamp="2025-11-28 07:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:54.264299778 +0000 UTC m=+1317.234533933" watchObservedRunningTime="2025-11-28 07:09:54.273257754 +0000 UTC m=+1317.243491909" Nov 28 07:09:55 crc kubenswrapper[4889]: I1128 07:09:55.589387 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:09:55 crc kubenswrapper[4889]: I1128 07:09:55.611508 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.279741 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b374792e-0c42-49c4-90ee-cdb3a872ecae","Type":"ContainerStarted","Data":"12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00"} Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.280616 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b374792e-0c42-49c4-90ee-cdb3a872ecae" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00" gracePeriod=30 Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.286405 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75bac447-8979-45d5-a2fd-e22d83e1b001","Type":"ContainerStarted","Data":"f74379e32f1e90e6851c1cc759b79adaa7b86d8d5bc59b8c75de94d7ca32b4cf"} Nov 28 07:09:57 crc 
kubenswrapper[4889]: I1128 07:09:57.289995 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0","Type":"ContainerStarted","Data":"1bf25ceacbf58c1540af4c922a026c68f36f9f68df9ad762d1f54361fb401163"} Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.297279 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" event={"ID":"a16d6068-941b-4d5f-a74c-42e363182095","Type":"ContainerStarted","Data":"29c66f3ede3318f319febf94be785cfc6184dc3969697eb9b727d92f7dbc4ea3"} Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.297900 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.304096 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.881340699 podStartE2EDuration="6.30407412s" podCreationTimestamp="2025-11-28 07:09:51 +0000 UTC" firstStartedPulling="2025-11-28 07:09:52.307231943 +0000 UTC m=+1315.277466098" lastFinishedPulling="2025-11-28 07:09:56.729965364 +0000 UTC m=+1319.700199519" observedRunningTime="2025-11-28 07:09:57.30201422 +0000 UTC m=+1320.272248395" watchObservedRunningTime="2025-11-28 07:09:57.30407412 +0000 UTC m=+1320.274308275" Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.309969 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dffae5b-ddfc-4c8a-81d5-832bf584f779","Type":"ContainerStarted","Data":"116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4"} Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.334221 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" podStartSLOduration=6.334204045 podStartE2EDuration="6.334204045s" podCreationTimestamp="2025-11-28 07:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:57.330386153 +0000 UTC m=+1320.300620308" watchObservedRunningTime="2025-11-28 07:09:57.334204045 +0000 UTC m=+1320.304438200" Nov 28 07:09:57 crc kubenswrapper[4889]: I1128 07:09:57.354416 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.174206806 podStartE2EDuration="6.354404501s" podCreationTimestamp="2025-11-28 07:09:51 +0000 UTC" firstStartedPulling="2025-11-28 07:09:52.553163461 +0000 UTC m=+1315.523397616" lastFinishedPulling="2025-11-28 07:09:56.733361156 +0000 UTC m=+1319.703595311" observedRunningTime="2025-11-28 07:09:57.351157613 +0000 UTC m=+1320.321391778" watchObservedRunningTime="2025-11-28 07:09:57.354404501 +0000 UTC m=+1320.324638656" Nov 28 07:09:58 crc kubenswrapper[4889]: I1128 07:09:58.325498 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75bac447-8979-45d5-a2fd-e22d83e1b001","Type":"ContainerStarted","Data":"44b7db3182088e18f98c4148c5c8466a90e1d81667a7e51d563183b230863828"} Nov 28 07:09:58 crc kubenswrapper[4889]: I1128 07:09:58.328966 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0","Type":"ContainerStarted","Data":"65abebeb481fdc5e543631f529a5928f29692c01f9997d7f5a841d13781bcbdd"} Nov 28 07:09:58 crc kubenswrapper[4889]: I1128 07:09:58.329248 4889 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerName="nova-metadata-log" containerID="cri-o://1bf25ceacbf58c1540af4c922a026c68f36f9f68df9ad762d1f54361fb401163" gracePeriod=30 Nov 28 07:09:58 crc kubenswrapper[4889]: I1128 07:09:58.329347 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerName="nova-metadata-metadata" containerID="cri-o://65abebeb481fdc5e543631f529a5928f29692c01f9997d7f5a841d13781bcbdd" gracePeriod=30 Nov 28 07:09:58 crc kubenswrapper[4889]: I1128 07:09:58.348337 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.264964674 podStartE2EDuration="7.348314128s" podCreationTimestamp="2025-11-28 07:09:51 +0000 UTC" firstStartedPulling="2025-11-28 07:09:52.650111174 +0000 UTC m=+1315.620345329" lastFinishedPulling="2025-11-28 07:09:56.733460628 +0000 UTC m=+1319.703694783" observedRunningTime="2025-11-28 07:09:58.346634588 +0000 UTC m=+1321.316868743" watchObservedRunningTime="2025-11-28 07:09:58.348314128 +0000 UTC m=+1321.318548283" Nov 28 07:09:58 crc kubenswrapper[4889]: I1128 07:09:58.378124 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.931317895 podStartE2EDuration="7.378104185s" podCreationTimestamp="2025-11-28 07:09:51 +0000 UTC" firstStartedPulling="2025-11-28 07:09:52.282493067 +0000 UTC m=+1315.252727222" lastFinishedPulling="2025-11-28 07:09:56.729279357 +0000 UTC m=+1319.699513512" observedRunningTime="2025-11-28 07:09:58.376132468 +0000 UTC m=+1321.346366623" watchObservedRunningTime="2025-11-28 07:09:58.378104185 +0000 UTC m=+1321.348338340" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.357620 4889 generic.go:334] "Generic (PLEG): container finished" podID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerID="65abebeb481fdc5e543631f529a5928f29692c01f9997d7f5a841d13781bcbdd" exitCode=0 Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.359109 4889 generic.go:334] "Generic (PLEG): container finished" podID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerID="1bf25ceacbf58c1540af4c922a026c68f36f9f68df9ad762d1f54361fb401163" exitCode=143 Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.359446 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0","Type":"ContainerDied","Data":"65abebeb481fdc5e543631f529a5928f29692c01f9997d7f5a841d13781bcbdd"} Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.359598 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0","Type":"ContainerDied","Data":"1bf25ceacbf58c1540af4c922a026c68f36f9f68df9ad762d1f54361fb401163"} Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.431562 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.511893 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-config-data\") pod \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.512083 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt22w\" (UniqueName: \"kubernetes.io/projected/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-kube-api-access-lt22w\") pod \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.512126 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-logs\") pod \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.512215 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-combined-ca-bundle\") pod \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\" (UID: \"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0\") " Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.512529 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-logs" (OuterVolumeSpecName: "logs") pod "8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" (UID: "8f47ba64-a74f-4c79-9f58-00ac8c62a3c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.513995 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.518270 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-kube-api-access-lt22w" (OuterVolumeSpecName: "kube-api-access-lt22w") pod "8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" (UID: "8f47ba64-a74f-4c79-9f58-00ac8c62a3c0"). InnerVolumeSpecName "kube-api-access-lt22w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.542801 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-config-data" (OuterVolumeSpecName: "config-data") pod "8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" (UID: "8f47ba64-a74f-4c79-9f58-00ac8c62a3c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.552337 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" (UID: "8f47ba64-a74f-4c79-9f58-00ac8c62a3c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.615524 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.615766 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt22w\" (UniqueName: \"kubernetes.io/projected/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-kube-api-access-lt22w\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:59 crc kubenswrapper[4889]: I1128 07:09:59.615834 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.369085 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f47ba64-a74f-4c79-9f58-00ac8c62a3c0","Type":"ContainerDied","Data":"a6096ab344644ccbdf7be241559f29c041eca24699e28fbeb0bbb8dbc749f130"} Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.369273 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.370231 4889 scope.go:117] "RemoveContainer" containerID="65abebeb481fdc5e543631f529a5928f29692c01f9997d7f5a841d13781bcbdd" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.399884 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.407853 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.409988 4889 scope.go:117] "RemoveContainer" containerID="1bf25ceacbf58c1540af4c922a026c68f36f9f68df9ad762d1f54361fb401163" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.428398 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:00 crc kubenswrapper[4889]: E1128 07:10:00.428909 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerName="nova-metadata-metadata" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.428931 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerName="nova-metadata-metadata" Nov 28 07:10:00 crc kubenswrapper[4889]: E1128 07:10:00.428962 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerName="nova-metadata-log" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.428971 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerName="nova-metadata-log" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.429202 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerName="nova-metadata-metadata" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.429224 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" containerName="nova-metadata-log" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.430403 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.432975 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.439230 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.446090 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.533220 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxxbr\" (UniqueName: \"kubernetes.io/projected/aa35605b-304a-4198-84a9-17805d84a349-kube-api-access-dxxbr\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.533306 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-config-data\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.533393 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.533591 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.533757 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa35605b-304a-4198-84a9-17805d84a349-logs\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.635042 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.635110 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa35605b-304a-4198-84a9-17805d84a349-logs\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.635188 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxxbr\" (UniqueName: \"kubernetes.io/projected/aa35605b-304a-4198-84a9-17805d84a349-kube-api-access-dxxbr\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " 
pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.635229 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-config-data\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.635255 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.635619 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa35605b-304a-4198-84a9-17805d84a349-logs\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.643452 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.643594 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.651392 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-config-data\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.651766 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxxbr\" (UniqueName: \"kubernetes.io/projected/aa35605b-304a-4198-84a9-17805d84a349-kube-api-access-dxxbr\") pod \"nova-metadata-0\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " pod="openstack/nova-metadata-0" Nov 28 07:10:00 crc kubenswrapper[4889]: I1128 07:10:00.749482 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:01 crc kubenswrapper[4889]: I1128 07:10:01.241142 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:01 crc kubenswrapper[4889]: I1128 07:10:01.347954 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f47ba64-a74f-4c79-9f58-00ac8c62a3c0" path="/var/lib/kubelet/pods/8f47ba64-a74f-4c79-9f58-00ac8c62a3c0/volumes" Nov 28 07:10:01 crc kubenswrapper[4889]: I1128 07:10:01.392282 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa35605b-304a-4198-84a9-17805d84a349","Type":"ContainerStarted","Data":"e320775ac2e31670491a053a67626fc534d5011603a4e96e0753059a9150f0e7"} Nov 28 07:10:01 crc kubenswrapper[4889]: I1128 07:10:01.681570 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:01 crc kubenswrapper[4889]: I1128 07:10:01.943749 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:10:01 crc kubenswrapper[4889]: I1128 07:10:01.943794 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:10:01 crc kubenswrapper[4889]: I1128 07:10:01.980172 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 07:10:01 crc kubenswrapper[4889]: I1128 07:10:01.980431 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.013886 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.057537 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.136324 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8ccb5c7cf-k9n2l"] Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.136592 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" podUID="ed8cab85-cee8-4604-9898-9215c05dbe9d" containerName="dnsmasq-dns" containerID="cri-o://fcb4b482e76a139224c75dd763c6b8737558702df24d0451a0c3c91427c4ae02" gracePeriod=10 Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.405913 4889 generic.go:334] "Generic (PLEG): container finished" podID="ed8cab85-cee8-4604-9898-9215c05dbe9d" containerID="fcb4b482e76a139224c75dd763c6b8737558702df24d0451a0c3c91427c4ae02" exitCode=0 Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.405963 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" event={"ID":"ed8cab85-cee8-4604-9898-9215c05dbe9d","Type":"ContainerDied","Data":"fcb4b482e76a139224c75dd763c6b8737558702df24d0451a0c3c91427c4ae02"} Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.407041 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa35605b-304a-4198-84a9-17805d84a349","Type":"ContainerStarted","Data":"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337"} Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.407058 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"aa35605b-304a-4198-84a9-17805d84a349","Type":"ContainerStarted","Data":"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a"} Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.409775 4889 generic.go:334] "Generic (PLEG): container finished" podID="c93fa965-a510-4c13-b946-51150bd493e1" containerID="d8b7dcde5ba3efb58f541995114142c3cd0d5e253a1a00051e1b96d3c29ecbaa" exitCode=0 Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.410235 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pnfxg" event={"ID":"c93fa965-a510-4c13-b946-51150bd493e1","Type":"ContainerDied","Data":"d8b7dcde5ba3efb58f541995114142c3cd0d5e253a1a00051e1b96d3c29ecbaa"} Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.434545 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.434527561 podStartE2EDuration="2.434527561s" podCreationTimestamp="2025-11-28 07:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:02.427160744 +0000 UTC m=+1325.397394899" watchObservedRunningTime="2025-11-28 07:10:02.434527561 +0000 UTC m=+1325.404761716" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.462288 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.692624 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.796166 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-svc\") pod \"ed8cab85-cee8-4604-9898-9215c05dbe9d\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.796679 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-nb\") pod \"ed8cab85-cee8-4604-9898-9215c05dbe9d\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.796801 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-swift-storage-0\") pod \"ed8cab85-cee8-4604-9898-9215c05dbe9d\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.796841 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brwz9\" (UniqueName: \"kubernetes.io/projected/ed8cab85-cee8-4604-9898-9215c05dbe9d-kube-api-access-brwz9\") pod \"ed8cab85-cee8-4604-9898-9215c05dbe9d\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.796866 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-config\") pod \"ed8cab85-cee8-4604-9898-9215c05dbe9d\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.797571 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-sb\") pod \"ed8cab85-cee8-4604-9898-9215c05dbe9d\" (UID: \"ed8cab85-cee8-4604-9898-9215c05dbe9d\") " Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.818063 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8cab85-cee8-4604-9898-9215c05dbe9d-kube-api-access-brwz9" (OuterVolumeSpecName: "kube-api-access-brwz9") pod "ed8cab85-cee8-4604-9898-9215c05dbe9d" (UID: "ed8cab85-cee8-4604-9898-9215c05dbe9d"). InnerVolumeSpecName "kube-api-access-brwz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.846000 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed8cab85-cee8-4604-9898-9215c05dbe9d" (UID: "ed8cab85-cee8-4604-9898-9215c05dbe9d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.847878 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed8cab85-cee8-4604-9898-9215c05dbe9d" (UID: "ed8cab85-cee8-4604-9898-9215c05dbe9d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.853675 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed8cab85-cee8-4604-9898-9215c05dbe9d" (UID: "ed8cab85-cee8-4604-9898-9215c05dbe9d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.859937 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed8cab85-cee8-4604-9898-9215c05dbe9d" (UID: "ed8cab85-cee8-4604-9898-9215c05dbe9d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.889300 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-config" (OuterVolumeSpecName: "config") pod "ed8cab85-cee8-4604-9898-9215c05dbe9d" (UID: "ed8cab85-cee8-4604-9898-9215c05dbe9d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.901082 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.901111 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.901121 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brwz9\" (UniqueName: \"kubernetes.io/projected/ed8cab85-cee8-4604-9898-9215c05dbe9d-kube-api-access-brwz9\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.901132 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.901140 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:02 crc kubenswrapper[4889]: I1128 07:10:02.901150 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed8cab85-cee8-4604-9898-9215c05dbe9d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.026934 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.181:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.026933 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.181:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.419003 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" event={"ID":"ed8cab85-cee8-4604-9898-9215c05dbe9d","Type":"ContainerDied","Data":"c9a9e8c7fc53625804f41ebf30a00a1f5643bb7fed306837c240397e330dfce8"} Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.419050 4889 scope.go:117] "RemoveContainer" containerID="fcb4b482e76a139224c75dd763c6b8737558702df24d0451a0c3c91427c4ae02" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.420284 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8ccb5c7cf-k9n2l" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.425391 4889 generic.go:334] "Generic (PLEG): container finished" podID="da834219-0eb6-44f4-9e57-81a4ef2c201c" containerID="e5eff60d8d8d77100dc3390741635f318a7f480049c275a81964aa7dfa36c631" exitCode=0 Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.426685 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rkslq" event={"ID":"da834219-0eb6-44f4-9e57-81a4ef2c201c","Type":"ContainerDied","Data":"e5eff60d8d8d77100dc3390741635f318a7f480049c275a81964aa7dfa36c631"} Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.436511 4889 scope.go:117] "RemoveContainer" containerID="f324159977992549c1956657bb766a213267774090df227c5602d57d1df7efca" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.485776 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8ccb5c7cf-k9n2l"] Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.493445 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8ccb5c7cf-k9n2l"] Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.774153 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.817475 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-combined-ca-bundle\") pod \"c93fa965-a510-4c13-b946-51150bd493e1\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.819400 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-scripts\") pod \"c93fa965-a510-4c13-b946-51150bd493e1\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.819580 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knqb7\" (UniqueName: \"kubernetes.io/projected/c93fa965-a510-4c13-b946-51150bd493e1-kube-api-access-knqb7\") pod \"c93fa965-a510-4c13-b946-51150bd493e1\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.820009 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-config-data\") pod \"c93fa965-a510-4c13-b946-51150bd493e1\" (UID: \"c93fa965-a510-4c13-b946-51150bd493e1\") " Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.823334 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93fa965-a510-4c13-b946-51150bd493e1-kube-api-access-knqb7" (OuterVolumeSpecName: "kube-api-access-knqb7") pod "c93fa965-a510-4c13-b946-51150bd493e1" (UID: "c93fa965-a510-4c13-b946-51150bd493e1"). InnerVolumeSpecName "kube-api-access-knqb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.826620 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-scripts" (OuterVolumeSpecName: "scripts") pod "c93fa965-a510-4c13-b946-51150bd493e1" (UID: "c93fa965-a510-4c13-b946-51150bd493e1"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.855847 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-config-data" (OuterVolumeSpecName: "config-data") pod "c93fa965-a510-4c13-b946-51150bd493e1" (UID: "c93fa965-a510-4c13-b946-51150bd493e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.862899 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c93fa965-a510-4c13-b946-51150bd493e1" (UID: "c93fa965-a510-4c13-b946-51150bd493e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.922254 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.922290 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.922301 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knqb7\" (UniqueName: \"kubernetes.io/projected/c93fa965-a510-4c13-b946-51150bd493e1-kube-api-access-knqb7\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:03 crc kubenswrapper[4889]: I1128 07:10:03.922311 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c93fa965-a510-4c13-b946-51150bd493e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.437155 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pnfxg" Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.444415 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pnfxg" event={"ID":"c93fa965-a510-4c13-b946-51150bd493e1","Type":"ContainerDied","Data":"f38336abec4fb1c6f0635db7883b9d90129866161d63755f0314a2af58139018"} Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.444467 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f38336abec4fb1c6f0635db7883b9d90129866161d63755f0314a2af58139018" Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.636758 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.637006 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-log" containerID="cri-o://f74379e32f1e90e6851c1cc759b79adaa7b86d8d5bc59b8c75de94d7ca32b4cf" gracePeriod=30 Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.637133 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-api" containerID="cri-o://44b7db3182088e18f98c4148c5c8466a90e1d81667a7e51d563183b230863828" gracePeriod=30 Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.664117 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.731474 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.731663 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aa35605b-304a-4198-84a9-17805d84a349" containerName="nova-metadata-log" containerID="cri-o://ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a" gracePeriod=30 Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.732111 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aa35605b-304a-4198-84a9-17805d84a349" containerName="nova-metadata-metadata" containerID="cri-o://8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337" gracePeriod=30 Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.829183 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.940204 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl4xw\" (UniqueName: \"kubernetes.io/projected/da834219-0eb6-44f4-9e57-81a4ef2c201c-kube-api-access-jl4xw\") pod \"da834219-0eb6-44f4-9e57-81a4ef2c201c\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.940316 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-scripts\") pod \"da834219-0eb6-44f4-9e57-81a4ef2c201c\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.940410 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-combined-ca-bundle\") pod \"da834219-0eb6-44f4-9e57-81a4ef2c201c\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.940571 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-config-data\") pod \"da834219-0eb6-44f4-9e57-81a4ef2c201c\" (UID: \"da834219-0eb6-44f4-9e57-81a4ef2c201c\") " Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.944855 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da834219-0eb6-44f4-9e57-81a4ef2c201c-kube-api-access-jl4xw" (OuterVolumeSpecName: "kube-api-access-jl4xw") pod "da834219-0eb6-44f4-9e57-81a4ef2c201c" (UID: "da834219-0eb6-44f4-9e57-81a4ef2c201c"). InnerVolumeSpecName "kube-api-access-jl4xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.944914 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-scripts" (OuterVolumeSpecName: "scripts") pod "da834219-0eb6-44f4-9e57-81a4ef2c201c" (UID: "da834219-0eb6-44f4-9e57-81a4ef2c201c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.966047 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-config-data" (OuterVolumeSpecName: "config-data") pod "da834219-0eb6-44f4-9e57-81a4ef2c201c" (UID: "da834219-0eb6-44f4-9e57-81a4ef2c201c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:04 crc kubenswrapper[4889]: I1128 07:10:04.968844 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da834219-0eb6-44f4-9e57-81a4ef2c201c" (UID: "da834219-0eb6-44f4-9e57-81a4ef2c201c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.042846 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.042876 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl4xw\" (UniqueName: \"kubernetes.io/projected/da834219-0eb6-44f4-9e57-81a4ef2c201c-kube-api-access-jl4xw\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.042886 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.042896 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da834219-0eb6-44f4-9e57-81a4ef2c201c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.344216 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8cab85-cee8-4604-9898-9215c05dbe9d" path="/var/lib/kubelet/pods/ed8cab85-cee8-4604-9898-9215c05dbe9d/volumes" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.355950 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.451060 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-nova-metadata-tls-certs\") pod \"aa35605b-304a-4198-84a9-17805d84a349\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.451225 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxxbr\" (UniqueName: \"kubernetes.io/projected/aa35605b-304a-4198-84a9-17805d84a349-kube-api-access-dxxbr\") pod \"aa35605b-304a-4198-84a9-17805d84a349\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.451347 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa35605b-304a-4198-84a9-17805d84a349-logs\") pod \"aa35605b-304a-4198-84a9-17805d84a349\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.451413 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-config-data\") pod \"aa35605b-304a-4198-84a9-17805d84a349\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.451448 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-combined-ca-bundle\") pod \"aa35605b-304a-4198-84a9-17805d84a349\" (UID: \"aa35605b-304a-4198-84a9-17805d84a349\") " Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.454886 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa35605b-304a-4198-84a9-17805d84a349-logs" (OuterVolumeSpecName: 
"logs") pod "aa35605b-304a-4198-84a9-17805d84a349" (UID: "aa35605b-304a-4198-84a9-17805d84a349"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.456842 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa35605b-304a-4198-84a9-17805d84a349-kube-api-access-dxxbr" (OuterVolumeSpecName: "kube-api-access-dxxbr") pod "aa35605b-304a-4198-84a9-17805d84a349" (UID: "aa35605b-304a-4198-84a9-17805d84a349"). InnerVolumeSpecName "kube-api-access-dxxbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.457133 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa35605b-304a-4198-84a9-17805d84a349-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.457159 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxxbr\" (UniqueName: \"kubernetes.io/projected/aa35605b-304a-4198-84a9-17805d84a349-kube-api-access-dxxbr\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.457686 4889 generic.go:334] "Generic (PLEG): container finished" podID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerID="f74379e32f1e90e6851c1cc759b79adaa7b86d8d5bc59b8c75de94d7ca32b4cf" exitCode=143 Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.457786 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75bac447-8979-45d5-a2fd-e22d83e1b001","Type":"ContainerDied","Data":"f74379e32f1e90e6851c1cc759b79adaa7b86d8d5bc59b8c75de94d7ca32b4cf"} Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.463246 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rkslq" event={"ID":"da834219-0eb6-44f4-9e57-81a4ef2c201c","Type":"ContainerDied","Data":"b2306de6732c7a22cdf3f503795e3846a32f7b4c3ee5a991e166c3b8514dbf7f"} Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.463293 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2306de6732c7a22cdf3f503795e3846a32f7b4c3ee5a991e166c3b8514dbf7f" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.463376 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rkslq" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.468243 4889 generic.go:334] "Generic (PLEG): container finished" podID="aa35605b-304a-4198-84a9-17805d84a349" containerID="8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337" exitCode=0 Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.468267 4889 generic.go:334] "Generic (PLEG): container finished" podID="aa35605b-304a-4198-84a9-17805d84a349" containerID="ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a" exitCode=143 Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.468294 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa35605b-304a-4198-84a9-17805d84a349","Type":"ContainerDied","Data":"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337"} Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.468315 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa35605b-304a-4198-84a9-17805d84a349","Type":"ContainerDied","Data":"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a"} Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.468325 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa35605b-304a-4198-84a9-17805d84a349","Type":"ContainerDied","Data":"e320775ac2e31670491a053a67626fc534d5011603a4e96e0753059a9150f0e7"} Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.468340 4889 scope.go:117] "RemoveContainer" containerID="8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.468597 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.469815 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0dffae5b-ddfc-4c8a-81d5-832bf584f779" containerName="nova-scheduler-scheduler" containerID="cri-o://116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4" gracePeriod=30 Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.484428 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-config-data" (OuterVolumeSpecName: "config-data") pod "aa35605b-304a-4198-84a9-17805d84a349" (UID: "aa35605b-304a-4198-84a9-17805d84a349"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.510825 4889 scope.go:117] "RemoveContainer" containerID="ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.517891 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa35605b-304a-4198-84a9-17805d84a349" (UID: "aa35605b-304a-4198-84a9-17805d84a349"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.521888 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "aa35605b-304a-4198-84a9-17805d84a349" (UID: "aa35605b-304a-4198-84a9-17805d84a349"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.529113 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:10:05 crc kubenswrapper[4889]: E1128 07:10:05.529632 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da834219-0eb6-44f4-9e57-81a4ef2c201c" containerName="nova-cell1-conductor-db-sync" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.529654 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="da834219-0eb6-44f4-9e57-81a4ef2c201c" containerName="nova-cell1-conductor-db-sync" Nov 28 07:10:05 crc kubenswrapper[4889]: E1128 07:10:05.529668 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93fa965-a510-4c13-b946-51150bd493e1" containerName="nova-manage" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.529675 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93fa965-a510-4c13-b946-51150bd493e1" containerName="nova-manage" Nov 28 07:10:05 crc kubenswrapper[4889]: E1128 07:10:05.529691 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8cab85-cee8-4604-9898-9215c05dbe9d" containerName="dnsmasq-dns" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.529697 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8cab85-cee8-4604-9898-9215c05dbe9d" containerName="dnsmasq-dns" Nov 28 07:10:05 crc kubenswrapper[4889]: E1128 07:10:05.529776 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa35605b-304a-4198-84a9-17805d84a349" containerName="nova-metadata-log" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.529784 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa35605b-304a-4198-84a9-17805d84a349" containerName="nova-metadata-log" Nov 28 07:10:05 crc kubenswrapper[4889]: E1128 07:10:05.529801 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa35605b-304a-4198-84a9-17805d84a349" containerName="nova-metadata-metadata" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.529807 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa35605b-304a-4198-84a9-17805d84a349" containerName="nova-metadata-metadata" Nov 28 07:10:05 crc kubenswrapper[4889]: E1128 07:10:05.529820 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8cab85-cee8-4604-9898-9215c05dbe9d" containerName="init" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.529825 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8cab85-cee8-4604-9898-9215c05dbe9d" containerName="init" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.530000 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa35605b-304a-4198-84a9-17805d84a349" containerName="nova-metadata-log" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.530012 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="da834219-0eb6-44f4-9e57-81a4ef2c201c" containerName="nova-cell1-conductor-db-sync" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.530027 4889 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8cab85-cee8-4604-9898-9215c05dbe9d" containerName="dnsmasq-dns" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.530049 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c93fa965-a510-4c13-b946-51150bd493e1" containerName="nova-manage" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.530061 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa35605b-304a-4198-84a9-17805d84a349" containerName="nova-metadata-metadata" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.530794 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.533786 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.536528 4889 scope.go:117] "RemoveContainer" containerID="8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337" Nov 28 07:10:05 crc kubenswrapper[4889]: E1128 07:10:05.537635 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337\": container with ID starting with 8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337 not found: ID does not exist" containerID="8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.537668 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337"} err="failed to get container status \"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337\": rpc error: code = NotFound desc = could not find container \"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337\": container with ID starting with 8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337 not found: ID does not exist" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.537740 4889 scope.go:117] "RemoveContainer" containerID="ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a" Nov 28 07:10:05 crc kubenswrapper[4889]: E1128 07:10:05.537962 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a\": container with ID starting with ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a not found: ID does not exist" containerID="ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.537983 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a"} err="failed to get container status \"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a\": rpc error: code = NotFound desc = could not find container \"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a\": container with ID starting with ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a not found: ID does not exist" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.537996 4889 scope.go:117] "RemoveContainer" containerID="8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337" Nov 28 
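[Note] The E1128 "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs are a benign race: by the time the kubelet re-queries CRI-O, the container has already been removed. Cleanup code over a gRPC API typically treats NotFound as success; a sketch follows, where removeFromRuntime is a hypothetical stand-in, not a real CRI client call:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// cleanup deletes a container and swallows gRPC NotFound, since for a delete
// the container's absence is already the desired end state.
func cleanup(id string, removeFromRuntime func(string) error) error {
	err := removeFromRuntime(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already removed, ignoring\n", id)
		return nil // idempotent delete
	}
	return err
}

func main() {
	// Simulate the runtime answering like the log above.
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	_ = cleanup("8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337", gone)
}
```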
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.538415 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337"} err="failed to get container status \"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337\": rpc error: code = NotFound desc = could not find container \"8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337\": container with ID starting with 8767c98bfffd5a0a6e02fd2248cd7c46a3e83f30eac916decf3d2e58483f3337 not found: ID does not exist"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.538456 4889 scope.go:117] "RemoveContainer" containerID="ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.543623 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a"} err="failed to get container status \"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a\": rpc error: code = NotFound desc = could not find container \"ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a\": container with ID starting with ce01534fff586a857f1954944ba103b66279fe31bcc43e8e60a7b8db11bc7e5a not found: ID does not exist"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.553610 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.558232 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.558294 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.558331 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhxsm\" (UniqueName: \"kubernetes.io/projected/bf4ff6f2-105e-4f62-be58-3054d0a54fed-kube-api-access-xhxsm\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.558413 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.558425 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.558435 4889 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa35605b-304a-4198-84a9-17805d84a349-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.659694 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.659790 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.659830 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhxsm\" (UniqueName: \"kubernetes.io/projected/bf4ff6f2-105e-4f62-be58-3054d0a54fed-kube-api-access-xhxsm\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.666123 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.666811 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.676015 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhxsm\" (UniqueName: \"kubernetes.io/projected/bf4ff6f2-105e-4f62-be58-3054d0a54fed-kube-api-access-xhxsm\") pod \"nova-cell1-conductor-0\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.736697 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.836118 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.848977 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.858391 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.862137 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.864016 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
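[Note] The attach/mount entries for nova-cell1-conductor-0 cover two secret volumes plus the projected service-account token (kube-api-access-xhxsm). Roughly how such a secret volume is declared in a pod spec, using corev1 types; the mount path is a hypothetical example, and the secret name comes from the "Caches populated" line above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The "config-data" volume the kubelet is setting up: a Secret-backed
	// volume (PluginName "kubernetes.io/secret" in the mount lines).
	vol := corev1.Volume{
		Name: "config-data",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "nova-cell1-conductor-config-data",
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "config-data",
		MountPath: "/etc/nova", // hypothetical path for illustration
		ReadOnly:  true,
	}
	fmt.Printf("volume=%+v\nmount=%+v\n", vol, mount)
}
```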
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.866736 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.867279 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.874773 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.964491 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.964538 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcq9b\" (UniqueName: \"kubernetes.io/projected/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-kube-api-access-hcq9b\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.964574 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-config-data\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.964765 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:05 crc kubenswrapper[4889]: I1128 07:10:05.964792 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-logs\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.067839 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.067910 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-logs\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.067968 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " 
pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.067995 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcq9b\" (UniqueName: \"kubernetes.io/projected/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-kube-api-access-hcq9b\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.068034 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-config-data\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.069059 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-logs\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.076368 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.076446 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.076496 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-config-data\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.088171 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcq9b\" (UniqueName: \"kubernetes.io/projected/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-kube-api-access-hcq9b\") pod \"nova-metadata-0\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.283434 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.339500 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:10:06 crc kubenswrapper[4889]: W1128 07:10:06.341758 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf4ff6f2_105e_4f62_be58_3054d0a54fed.slice/crio-821eeeef45fe1a309d05a04978883f02892218727343f733ec9946f42d7cd928 WatchSource:0}: Error finding container 821eeeef45fe1a309d05a04978883f02892218727343f733ec9946f42d7cd928: Status 404 returned error can't find the container with id 821eeeef45fe1a309d05a04978883f02892218727343f733ec9946f42d7cd928 Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.476628 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bf4ff6f2-105e-4f62-be58-3054d0a54fed","Type":"ContainerStarted","Data":"821eeeef45fe1a309d05a04978883f02892218727343f733ec9946f42d7cd928"} Nov 28 07:10:06 crc kubenswrapper[4889]: I1128 07:10:06.713202 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:06 crc kubenswrapper[4889]: E1128 07:10:06.981747 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:10:06 crc kubenswrapper[4889]: E1128 07:10:06.983121 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:10:06 crc kubenswrapper[4889]: E1128 07:10:06.986245 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:10:06 crc kubenswrapper[4889]: E1128 07:10:06.986293 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0dffae5b-ddfc-4c8a-81d5-832bf584f779" containerName="nova-scheduler-scheduler" Nov 28 07:10:07 crc kubenswrapper[4889]: I1128 07:10:07.358730 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa35605b-304a-4198-84a9-17805d84a349" path="/var/lib/kubelet/pods/aa35605b-304a-4198-84a9-17805d84a349/volumes" Nov 28 07:10:07 crc kubenswrapper[4889]: I1128 07:10:07.491597 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec","Type":"ContainerStarted","Data":"1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26"} Nov 28 07:10:07 crc kubenswrapper[4889]: I1128 07:10:07.491922 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec","Type":"ContainerStarted","Data":"9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0"} Nov 28 07:10:07 crc kubenswrapper[4889]: I1128 07:10:07.491940 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec","Type":"ContainerStarted","Data":"d3934a11d20b8be1f0231494bf8f973ae2fb0457e5875b21cc4a8c34d09c8a42"} Nov 28 07:10:07 crc kubenswrapper[4889]: I1128 07:10:07.506445 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bf4ff6f2-105e-4f62-be58-3054d0a54fed","Type":"ContainerStarted","Data":"cb72c62f8cc63262a8e708afde0ce2707f137cbcd34fac8af65e4f38de5d9324"} Nov 28 07:10:07 crc kubenswrapper[4889]: I1128 07:10:07.507296 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 07:10:07 crc kubenswrapper[4889]: I1128 07:10:07.540116 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.540089345 podStartE2EDuration="2.540089345s" podCreationTimestamp="2025-11-28 07:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:07.523193078 +0000 UTC m=+1330.493427233" watchObservedRunningTime="2025-11-28 07:10:07.540089345 +0000 UTC m=+1330.510323510" Nov 28 07:10:07 crc kubenswrapper[4889]: I1128 07:10:07.558236 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.558213771 podStartE2EDuration="2.558213771s" podCreationTimestamp="2025-11-28 07:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:07.549129922 +0000 UTC m=+1330.519364077" watchObservedRunningTime="2025-11-28 07:10:07.558213771 +0000 UTC m=+1330.528447926" Nov 28 07:10:08 crc kubenswrapper[4889]: E1128 07:10:08.266306 4889 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75bac447_8979_45d5_a2fd_e22d83e1b001.slice/crio-44b7db3182088e18f98c4148c5c8466a90e1d81667a7e51d563183b230863828.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75bac447_8979_45d5_a2fd_e22d83e1b001.slice/crio-conmon-44b7db3182088e18f98c4148c5c8466a90e1d81667a7e51d563183b230863828.scope\": RecentStats: unable to find data in memory cache]" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.519077 4889 generic.go:334] "Generic (PLEG): container finished" podID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerID="44b7db3182088e18f98c4148c5c8466a90e1d81667a7e51d563183b230863828" exitCode=0 Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.520632 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75bac447-8979-45d5-a2fd-e22d83e1b001","Type":"ContainerDied","Data":"44b7db3182088e18f98c4148c5c8466a90e1d81667a7e51d563183b230863828"} Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.520662 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"75bac447-8979-45d5-a2fd-e22d83e1b001","Type":"ContainerDied","Data":"fe30a95beb2bdad42f4d5724ea24a45ebbf30460b460032c2ff94362f5479347"} Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.520674 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe30a95beb2bdad42f4d5724ea24a45ebbf30460b460032c2ff94362f5479347" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.521245 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.643619 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-config-data\") pod \"75bac447-8979-45d5-a2fd-e22d83e1b001\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.643722 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bac447-8979-45d5-a2fd-e22d83e1b001-logs\") pod \"75bac447-8979-45d5-a2fd-e22d83e1b001\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.643830 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zmw7\" (UniqueName: \"kubernetes.io/projected/75bac447-8979-45d5-a2fd-e22d83e1b001-kube-api-access-7zmw7\") pod \"75bac447-8979-45d5-a2fd-e22d83e1b001\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.643953 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-combined-ca-bundle\") pod \"75bac447-8979-45d5-a2fd-e22d83e1b001\" (UID: \"75bac447-8979-45d5-a2fd-e22d83e1b001\") " Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.645734 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75bac447-8979-45d5-a2fd-e22d83e1b001-logs" (OuterVolumeSpecName: "logs") pod "75bac447-8979-45d5-a2fd-e22d83e1b001" (UID: "75bac447-8979-45d5-a2fd-e22d83e1b001"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.678868 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bac447-8979-45d5-a2fd-e22d83e1b001-kube-api-access-7zmw7" (OuterVolumeSpecName: "kube-api-access-7zmw7") pod "75bac447-8979-45d5-a2fd-e22d83e1b001" (UID: "75bac447-8979-45d5-a2fd-e22d83e1b001"). InnerVolumeSpecName "kube-api-access-7zmw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.691116 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-config-data" (OuterVolumeSpecName: "config-data") pod "75bac447-8979-45d5-a2fd-e22d83e1b001" (UID: "75bac447-8979-45d5-a2fd-e22d83e1b001"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.697222 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75bac447-8979-45d5-a2fd-e22d83e1b001" (UID: "75bac447-8979-45d5-a2fd-e22d83e1b001"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.746698 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.746748 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bac447-8979-45d5-a2fd-e22d83e1b001-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.746759 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zmw7\" (UniqueName: \"kubernetes.io/projected/75bac447-8979-45d5-a2fd-e22d83e1b001-kube-api-access-7zmw7\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:08 crc kubenswrapper[4889]: I1128 07:10:08.746771 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bac447-8979-45d5-a2fd-e22d83e1b001-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.341584 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.370002 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.370236 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9b3de373-d67f-4cc7-ac6b-43b4b3f94242" containerName="kube-state-metrics" containerID="cri-o://710a20a8d6ed3e17de97850bc314e869f452085a8f28180ad7b708972e3860d5" gracePeriod=30 Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.464949 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-combined-ca-bundle\") pod \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.465173 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-config-data\") pod \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.465204 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9z9c\" (UniqueName: \"kubernetes.io/projected/0dffae5b-ddfc-4c8a-81d5-832bf584f779-kube-api-access-k9z9c\") pod \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\" (UID: \"0dffae5b-ddfc-4c8a-81d5-832bf584f779\") " Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.475121 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dffae5b-ddfc-4c8a-81d5-832bf584f779-kube-api-access-k9z9c" 
(OuterVolumeSpecName: "kube-api-access-k9z9c") pod "0dffae5b-ddfc-4c8a-81d5-832bf584f779" (UID: "0dffae5b-ddfc-4c8a-81d5-832bf584f779"). InnerVolumeSpecName "kube-api-access-k9z9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.494328 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dffae5b-ddfc-4c8a-81d5-832bf584f779" (UID: "0dffae5b-ddfc-4c8a-81d5-832bf584f779"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.497506 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-config-data" (OuterVolumeSpecName: "config-data") pod "0dffae5b-ddfc-4c8a-81d5-832bf584f779" (UID: "0dffae5b-ddfc-4c8a-81d5-832bf584f779"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.542507 4889 generic.go:334] "Generic (PLEG): container finished" podID="9b3de373-d67f-4cc7-ac6b-43b4b3f94242" containerID="710a20a8d6ed3e17de97850bc314e869f452085a8f28180ad7b708972e3860d5" exitCode=2 Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.542595 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9b3de373-d67f-4cc7-ac6b-43b4b3f94242","Type":"ContainerDied","Data":"710a20a8d6ed3e17de97850bc314e869f452085a8f28180ad7b708972e3860d5"} Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.544602 4889 generic.go:334] "Generic (PLEG): container finished" podID="0dffae5b-ddfc-4c8a-81d5-832bf584f779" containerID="116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4" exitCode=0 Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.544683 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.545757 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.546588 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dffae5b-ddfc-4c8a-81d5-832bf584f779","Type":"ContainerDied","Data":"116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4"} Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.546620 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dffae5b-ddfc-4c8a-81d5-832bf584f779","Type":"ContainerDied","Data":"71de98606b0e3ba9d559ce51c8bc35d971af9da1a5c3b109580660077663b646"} Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.546640 4889 scope.go:117] "RemoveContainer" containerID="116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.567590 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.567625 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9z9c\" (UniqueName: \"kubernetes.io/projected/0dffae5b-ddfc-4c8a-81d5-832bf584f779-kube-api-access-k9z9c\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.567635 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dffae5b-ddfc-4c8a-81d5-832bf584f779-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.576683 4889 scope.go:117] "RemoveContainer" containerID="116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4" Nov 28 07:10:09 crc kubenswrapper[4889]: E1128 07:10:09.581451 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4\": container with ID starting with 116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4 not found: ID does not exist" containerID="116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.581514 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4"} err="failed to get container status \"116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4\": rpc error: code = NotFound desc = could not find container \"116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4\": container with ID starting with 116500a8e4c660f8387ef5a2c7dd0dcda2e0faa67adff2e2cf01ac96257cabc4 not found: ID does not exist" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.595557 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.644002 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.651184 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.658746 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: E1128 07:10:09.659194 4889 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-api" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.659217 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-api" Nov 28 07:10:09 crc kubenswrapper[4889]: E1128 07:10:09.659231 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dffae5b-ddfc-4c8a-81d5-832bf584f779" containerName="nova-scheduler-scheduler" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.659238 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dffae5b-ddfc-4c8a-81d5-832bf584f779" containerName="nova-scheduler-scheduler" Nov 28 07:10:09 crc kubenswrapper[4889]: E1128 07:10:09.659276 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-log" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.659282 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-log" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.659494 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-api" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.659515 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" containerName="nova-api-log" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.659540 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dffae5b-ddfc-4c8a-81d5-832bf584f779" containerName="nova-scheduler-scheduler" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.661615 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.663672 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.671939 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.683607 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.685020 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.687202 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.692042 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.704035 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.770900 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.771129 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-config-data\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.771202 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkxhr\" (UniqueName: \"kubernetes.io/projected/ba193675-44c4-4a85-b16d-66e0a5102004-kube-api-access-zkxhr\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.771281 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.771359 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpbg6\" (UniqueName: \"kubernetes.io/projected/669135df-f9c4-4aab-803b-a1732d33fd42-kube-api-access-lpbg6\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.771546 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba193675-44c4-4a85-b16d-66e0a5102004-logs\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.771595 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-config-data\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.777778 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.873144 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scglv\" (UniqueName: \"kubernetes.io/projected/9b3de373-d67f-4cc7-ac6b-43b4b3f94242-kube-api-access-scglv\") pod \"9b3de373-d67f-4cc7-ac6b-43b4b3f94242\" (UID: \"9b3de373-d67f-4cc7-ac6b-43b4b3f94242\") " Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.873498 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba193675-44c4-4a85-b16d-66e0a5102004-logs\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.873533 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-config-data\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.873603 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.873722 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-config-data\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.873754 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkxhr\" (UniqueName: \"kubernetes.io/projected/ba193675-44c4-4a85-b16d-66e0a5102004-kube-api-access-zkxhr\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.873782 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.873807 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpbg6\" (UniqueName: \"kubernetes.io/projected/669135df-f9c4-4aab-803b-a1732d33fd42-kube-api-access-lpbg6\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.874526 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba193675-44c4-4a85-b16d-66e0a5102004-logs\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.877347 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3de373-d67f-4cc7-ac6b-43b4b3f94242-kube-api-access-scglv" (OuterVolumeSpecName: "kube-api-access-scglv") pod 
"9b3de373-d67f-4cc7-ac6b-43b4b3f94242" (UID: "9b3de373-d67f-4cc7-ac6b-43b4b3f94242"). InnerVolumeSpecName "kube-api-access-scglv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.878324 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-config-data\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.880036 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-config-data\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.880292 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.881552 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.892856 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpbg6\" (UniqueName: \"kubernetes.io/projected/669135df-f9c4-4aab-803b-a1732d33fd42-kube-api-access-lpbg6\") pod \"nova-scheduler-0\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.895575 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkxhr\" (UniqueName: \"kubernetes.io/projected/ba193675-44c4-4a85-b16d-66e0a5102004-kube-api-access-zkxhr\") pod \"nova-api-0\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " pod="openstack/nova-api-0" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.975192 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scglv\" (UniqueName: \"kubernetes.io/projected/9b3de373-d67f-4cc7-ac6b-43b4b3f94242-kube-api-access-scglv\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:09 crc kubenswrapper[4889]: I1128 07:10:09.983499 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.010204 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.487223 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.548754 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.572024 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9b3de373-d67f-4cc7-ac6b-43b4b3f94242","Type":"ContainerDied","Data":"e04d5bc5f0442635252a97bc3e08c45cd5a6a202b5ac5032984894eb9d8a279d"} Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.572065 4889 scope.go:117] "RemoveContainer" containerID="710a20a8d6ed3e17de97850bc314e869f452085a8f28180ad7b708972e3860d5" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.572154 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.582599 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba193675-44c4-4a85-b16d-66e0a5102004","Type":"ContainerStarted","Data":"e05a8f56feadc5b238d2570f0ed73124bb91ac17253224357cf423aa2ef57f6d"} Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.699839 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.712022 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.719125 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:10:10 crc kubenswrapper[4889]: E1128 07:10:10.719673 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3de373-d67f-4cc7-ac6b-43b4b3f94242" containerName="kube-state-metrics" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.719719 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3de373-d67f-4cc7-ac6b-43b4b3f94242" containerName="kube-state-metrics" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.719971 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3de373-d67f-4cc7-ac6b-43b4b3f94242" containerName="kube-state-metrics" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.720827 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.724139 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.724391 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.726637 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.798859 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.798936 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqw6\" (UniqueName: \"kubernetes.io/projected/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-api-access-brqw6\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.798997 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.799066 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.903685 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.903753 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brqw6\" (UniqueName: \"kubernetes.io/projected/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-api-access-brqw6\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.903816 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.903887 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.916468 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.922312 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.926472 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:10 crc kubenswrapper[4889]: I1128 07:10:10.930288 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brqw6\" (UniqueName: \"kubernetes.io/projected/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-api-access-brqw6\") pod \"kube-state-metrics-0\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " pod="openstack/kube-state-metrics-0" Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.052487 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.284759 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.285086 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.343429 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dffae5b-ddfc-4c8a-81d5-832bf584f779" path="/var/lib/kubelet/pods/0dffae5b-ddfc-4c8a-81d5-832bf584f779/volumes" Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.344139 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75bac447-8979-45d5-a2fd-e22d83e1b001" path="/var/lib/kubelet/pods/75bac447-8979-45d5-a2fd-e22d83e1b001/volumes" Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.348075 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3de373-d67f-4cc7-ac6b-43b4b3f94242" path="/var/lib/kubelet/pods/9b3de373-d67f-4cc7-ac6b-43b4b3f94242/volumes" Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.355306 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.355573 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="ceilometer-central-agent" containerID="cri-o://842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531" gracePeriod=30 Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.355635 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="proxy-httpd" containerID="cri-o://2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20" gracePeriod=30 Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.355679 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="ceilometer-notification-agent" containerID="cri-o://efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9" gracePeriod=30 Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.355635 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="sg-core" containerID="cri-o://41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca" gracePeriod=30 Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.525770 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.593097 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"669135df-f9c4-4aab-803b-a1732d33fd42","Type":"ContainerStarted","Data":"e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1"} Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.593171 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"669135df-f9c4-4aab-803b-a1732d33fd42","Type":"ContainerStarted","Data":"76d63bf064f064b8b3bf20a779d7815de763eb9de1c953b60a9f77a86cf26a35"} Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.597499 4889 generic.go:334] "Generic (PLEG): 
container finished" podID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerID="2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20" exitCode=0 Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.597617 4889 generic.go:334] "Generic (PLEG): container finished" podID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerID="41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca" exitCode=2 Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.597725 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerDied","Data":"2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20"} Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.597843 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerDied","Data":"41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca"} Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.599517 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba193675-44c4-4a85-b16d-66e0a5102004","Type":"ContainerStarted","Data":"5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d"} Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.599619 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba193675-44c4-4a85-b16d-66e0a5102004","Type":"ContainerStarted","Data":"4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd"} Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.602070 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9aacedc-5e53-4c26-8ded-2af578a7de41","Type":"ContainerStarted","Data":"35cf157289eb6462ec06219ddc15c2733a617de52d034292bef59910991ae297"} Nov 28 07:10:11 crc kubenswrapper[4889]: I1128 07:10:11.617526 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.617509076 podStartE2EDuration="2.617509076s" podCreationTimestamp="2025-11-28 07:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:11.612935136 +0000 UTC m=+1334.583169291" watchObservedRunningTime="2025-11-28 07:10:11.617509076 +0000 UTC m=+1334.587743231" Nov 28 07:10:12 crc kubenswrapper[4889]: I1128 07:10:12.612379 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9aacedc-5e53-4c26-8ded-2af578a7de41","Type":"ContainerStarted","Data":"cfce3bc5d6f0828a73170fd49a5f64b6f79394b204fb2e4a2576389017af7153"} Nov 28 07:10:12 crc kubenswrapper[4889]: I1128 07:10:12.613123 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 07:10:12 crc kubenswrapper[4889]: I1128 07:10:12.615549 4889 generic.go:334] "Generic (PLEG): container finished" podID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerID="842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531" exitCode=0 Nov 28 07:10:12 crc kubenswrapper[4889]: I1128 07:10:12.615671 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerDied","Data":"842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531"} Nov 28 07:10:12 crc kubenswrapper[4889]: I1128 07:10:12.635500 
4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.635479073 podStartE2EDuration="3.635479073s" podCreationTimestamp="2025-11-28 07:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:11.63304507 +0000 UTC m=+1334.603279235" watchObservedRunningTime="2025-11-28 07:10:12.635479073 +0000 UTC m=+1335.605713248"
Nov 28 07:10:12 crc kubenswrapper[4889]: I1128 07:10:12.638433 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.2866754289999998 podStartE2EDuration="2.638418084s" podCreationTimestamp="2025-11-28 07:10:10 +0000 UTC" firstStartedPulling="2025-11-28 07:10:11.530853871 +0000 UTC m=+1334.501088026" lastFinishedPulling="2025-11-28 07:10:11.882596526 +0000 UTC m=+1334.852830681" observedRunningTime="2025-11-28 07:10:12.631051626 +0000 UTC m=+1335.601285781" watchObservedRunningTime="2025-11-28 07:10:12.638418084 +0000 UTC m=+1335.608652249"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.526434 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.641827 4889 generic.go:334] "Generic (PLEG): container finished" podID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerID="efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9" exitCode=0
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.641876 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerDied","Data":"efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9"}
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.641884 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.641905 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f24e5a27-ec6a-4856-b8b4-3aa733f866fb","Type":"ContainerDied","Data":"763282dec2e397cfe16354e2d6f41c14d40d18307eef9d362a602c5d1f4654f8"}
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.641926 4889 scope.go:117] "RemoveContainer" containerID="2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.668863 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-run-httpd\") pod \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") "
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.668984 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-config-data\") pod \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") "
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.669318 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f24e5a27-ec6a-4856-b8b4-3aa733f866fb" (UID: "f24e5a27-ec6a-4856-b8b4-3aa733f866fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.669437 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-sg-core-conf-yaml\") pod \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") "
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.669493 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-scripts\") pod \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") "
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.669548 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-combined-ca-bundle\") pod \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") "
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.669618 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg2xw\" (UniqueName: \"kubernetes.io/projected/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-kube-api-access-cg2xw\") pod \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") "
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.669640 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-log-httpd\") pod \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\" (UID: \"f24e5a27-ec6a-4856-b8b4-3aa733f866fb\") "
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.670630 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f24e5a27-ec6a-4856-b8b4-3aa733f866fb" (UID: "f24e5a27-ec6a-4856-b8b4-3aa733f866fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.671461 4889 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.671566 4889 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.682433 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-scripts" (OuterVolumeSpecName: "scripts") pod "f24e5a27-ec6a-4856-b8b4-3aa733f866fb" (UID: "f24e5a27-ec6a-4856-b8b4-3aa733f866fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.691810 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-kube-api-access-cg2xw" (OuterVolumeSpecName: "kube-api-access-cg2xw") pod "f24e5a27-ec6a-4856-b8b4-3aa733f866fb" (UID: "f24e5a27-ec6a-4856-b8b4-3aa733f866fb"). InnerVolumeSpecName "kube-api-access-cg2xw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.694339 4889 scope.go:117] "RemoveContainer" containerID="41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.708220 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f24e5a27-ec6a-4856-b8b4-3aa733f866fb" (UID: "f24e5a27-ec6a-4856-b8b4-3aa733f866fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.744807 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="9b3de373-d67f-4cc7-ac6b-43b4b3f94242" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.761048 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f24e5a27-ec6a-4856-b8b4-3aa733f866fb" (UID: "f24e5a27-ec6a-4856-b8b4-3aa733f866fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.773623 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.773664 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg2xw\" (UniqueName: \"kubernetes.io/projected/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-kube-api-access-cg2xw\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.773681 4889 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.773694 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.785315 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-config-data" (OuterVolumeSpecName: "config-data") pod "f24e5a27-ec6a-4856-b8b4-3aa733f866fb" (UID: "f24e5a27-ec6a-4856-b8b4-3aa733f866fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.796541 4889 scope.go:117] "RemoveContainer" containerID="efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.823933 4889 scope.go:117] "RemoveContainer" containerID="842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.851315 4889 scope.go:117] "RemoveContainer" containerID="2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20"
Nov 28 07:10:14 crc kubenswrapper[4889]: E1128 07:10:14.851778 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20\": container with ID starting with 2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20 not found: ID does not exist" containerID="2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.851834 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20"} err="failed to get container status \"2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20\": rpc error: code = NotFound desc = could not find container \"2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20\": container with ID starting with 2a4a1aa3c0dc6f4d1a129d0f98ef8830064f8f0b4d1b5c50f1179c87541bdf20 not found: ID does not exist"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.851866 4889 scope.go:117] "RemoveContainer" containerID="41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca"
Nov 28 07:10:14 crc kubenswrapper[4889]: E1128 07:10:14.852312 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca\": container with ID starting with 41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca not found: ID does not exist" containerID="41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.852357 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca"} err="failed to get container status \"41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca\": rpc error: code = NotFound desc = could not find container \"41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca\": container with ID starting with 41175e70e5d6f781653622ebb265a290ad4c1f52e77d8de04d0784e741ee17ca not found: ID does not exist"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.852386 4889 scope.go:117] "RemoveContainer" containerID="efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9"
Nov 28 07:10:14 crc kubenswrapper[4889]: E1128 07:10:14.852759 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9\": container with ID starting with efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9 not found: ID does not exist" containerID="efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.852789 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9"} err="failed to get container status \"efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9\": rpc error: code = NotFound desc = could not find container \"efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9\": container with ID starting with efc84fe150fa80b4c06fbc63b0a87f93b51a2d4e4218066bce7d60c3c25f0ab9 not found: ID does not exist"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.852806 4889 scope.go:117] "RemoveContainer" containerID="842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531"
Nov 28 07:10:14 crc kubenswrapper[4889]: E1128 07:10:14.853205 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531\": container with ID starting with 842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531 not found: ID does not exist" containerID="842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.853233 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531"} err="failed to get container status \"842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531\": rpc error: code = NotFound desc = could not find container \"842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531\": container with ID starting with 842caad2eee33969b3fcd80d8959f2bfe9eeb44f7c77a1ea1838534662fab531 not found: ID does not exist"
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.874969 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24e5a27-ec6a-4856-b8b4-3aa733f866fb-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.976490 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:14 crc kubenswrapper[4889]: I1128 07:10:14.985124 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.002294 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:15 crc kubenswrapper[4889]: E1128 07:10:15.002749 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="proxy-httpd"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.002771 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="proxy-httpd"
Nov 28 07:10:15 crc kubenswrapper[4889]: E1128 07:10:15.002797 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="sg-core"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.002805 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="sg-core"
Nov 28 07:10:15 crc kubenswrapper[4889]: E1128 07:10:15.002820 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="ceilometer-central-agent"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.002828 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="ceilometer-central-agent"
Nov 28 07:10:15 crc kubenswrapper[4889]: E1128 07:10:15.002846 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="ceilometer-notification-agent"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.002854 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="ceilometer-notification-agent"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.003079 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="ceilometer-notification-agent"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.003108 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="sg-core"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.003126 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="proxy-httpd"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.003141 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" containerName="ceilometer-central-agent"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.005204 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.009290 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.009405 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.009582 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.010523 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.035547 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.077753 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6x2x\" (UniqueName: \"kubernetes.io/projected/96c85297-311c-4694-b0be-96359d8dc923-kube-api-access-b6x2x\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.077812 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-run-httpd\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.077849 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-scripts\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.077872 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-log-httpd\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.078002 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.078037 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.078072 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-config-data\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.078095 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.179339 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.179395 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.179437 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-config-data\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.179466 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.179550 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6x2x\" (UniqueName: \"kubernetes.io/projected/96c85297-311c-4694-b0be-96359d8dc923-kube-api-access-b6x2x\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.179585 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-run-httpd\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.179621 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-scripts\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.179644 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-log-httpd\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.180261 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-log-httpd\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.180540 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-run-httpd\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.183881 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.184097 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.185071 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-scripts\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.185414 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-config-data\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.185626 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.197598 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6x2x\" (UniqueName: \"kubernetes.io/projected/96c85297-311c-4694-b0be-96359d8dc923-kube-api-access-b6x2x\") pod \"ceilometer-0\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.342636 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f24e5a27-ec6a-4856-b8b4-3aa733f866fb" path="/var/lib/kubelet/pods/f24e5a27-ec6a-4856-b8b4-3aa733f866fb/volumes"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.375379 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.812645 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:15 crc kubenswrapper[4889]: I1128 07:10:15.895502 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 28 07:10:16 crc kubenswrapper[4889]: I1128 07:10:16.284996 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 28 07:10:16 crc kubenswrapper[4889]: I1128 07:10:16.285051 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 28 07:10:16 crc kubenswrapper[4889]: I1128 07:10:16.661943 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerStarted","Data":"e139b5775b89f58e6ff18dfd61b0e6a5f74ae0ea524069043fe983320db67e42"}
Nov 28 07:10:17 crc kubenswrapper[4889]: I1128 07:10:17.296862 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 28 07:10:17 crc kubenswrapper[4889]: I1128 07:10:17.296930 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 28 07:10:18 crc kubenswrapper[4889]: I1128 07:10:18.681889 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerStarted","Data":"65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb"}
Nov 28 07:10:18 crc kubenswrapper[4889]: I1128 07:10:18.682411 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerStarted","Data":"d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35"}
Nov 28 07:10:19 crc kubenswrapper[4889]: I1128 07:10:19.693263 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerStarted","Data":"e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0"}
Nov 28 07:10:19 crc kubenswrapper[4889]: I1128 07:10:19.983957 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 28 07:10:19 crc kubenswrapper[4889]: I1128 07:10:19.984354 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:10:20 crc kubenswrapper[4889]: I1128 07:10:20.011252 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 07:10:20 crc kubenswrapper[4889]: I1128 07:10:20.053082 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 07:10:20 crc kubenswrapper[4889]: I1128 07:10:20.707812 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerStarted","Data":"8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c"} Nov 28 07:10:20 crc kubenswrapper[4889]: I1128 07:10:20.708129 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:10:20 crc kubenswrapper[4889]: I1128 07:10:20.740363 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.369769707 podStartE2EDuration="6.740342293s" podCreationTimestamp="2025-11-28 07:10:14 +0000 UTC" firstStartedPulling="2025-11-28 07:10:15.821241357 +0000 UTC m=+1338.791475512" lastFinishedPulling="2025-11-28 07:10:20.191813943 +0000 UTC m=+1343.162048098" observedRunningTime="2025-11-28 07:10:20.727517774 +0000 UTC m=+1343.697751939" watchObservedRunningTime="2025-11-28 07:10:20.740342293 +0000 UTC m=+1343.710576448" Nov 28 07:10:20 crc kubenswrapper[4889]: I1128 07:10:20.742483 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 07:10:21 crc kubenswrapper[4889]: I1128 07:10:21.066957 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 07:10:21 crc kubenswrapper[4889]: I1128 07:10:21.067414 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 07:10:21 crc kubenswrapper[4889]: I1128 07:10:21.083323 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 07:10:26 crc kubenswrapper[4889]: I1128 07:10:26.291942 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 07:10:26 crc kubenswrapper[4889]: I1128 07:10:26.292677 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 07:10:26 crc kubenswrapper[4889]: I1128 07:10:26.297117 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 07:10:26 crc kubenswrapper[4889]: I1128 07:10:26.299051 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.684399 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.731239 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-config-data\") pod \"b374792e-0c42-49c4-90ee-cdb3a872ecae\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.731381 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7zhz\" (UniqueName: \"kubernetes.io/projected/b374792e-0c42-49c4-90ee-cdb3a872ecae-kube-api-access-m7zhz\") pod \"b374792e-0c42-49c4-90ee-cdb3a872ecae\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.732153 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-combined-ca-bundle\") pod \"b374792e-0c42-49c4-90ee-cdb3a872ecae\" (UID: \"b374792e-0c42-49c4-90ee-cdb3a872ecae\") " Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.758168 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b374792e-0c42-49c4-90ee-cdb3a872ecae-kube-api-access-m7zhz" (OuterVolumeSpecName: "kube-api-access-m7zhz") pod "b374792e-0c42-49c4-90ee-cdb3a872ecae" (UID: "b374792e-0c42-49c4-90ee-cdb3a872ecae"). InnerVolumeSpecName "kube-api-access-m7zhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.824330 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-config-data" (OuterVolumeSpecName: "config-data") pod "b374792e-0c42-49c4-90ee-cdb3a872ecae" (UID: "b374792e-0c42-49c4-90ee-cdb3a872ecae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.831106 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b374792e-0c42-49c4-90ee-cdb3a872ecae" (UID: "b374792e-0c42-49c4-90ee-cdb3a872ecae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.831233 4889 generic.go:334] "Generic (PLEG): container finished" podID="b374792e-0c42-49c4-90ee-cdb3a872ecae" containerID="12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00" exitCode=137 Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.832082 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.832095 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b374792e-0c42-49c4-90ee-cdb3a872ecae","Type":"ContainerDied","Data":"12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00"} Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.832401 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b374792e-0c42-49c4-90ee-cdb3a872ecae","Type":"ContainerDied","Data":"d83644cf1c47a6d2d652a09d1975555ea4a9c41c414d13d51cf6dd1081e3d851"} Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.832514 4889 scope.go:117] "RemoveContainer" containerID="12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.833879 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.833903 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374792e-0c42-49c4-90ee-cdb3a872ecae-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.833913 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7zhz\" (UniqueName: \"kubernetes.io/projected/b374792e-0c42-49c4-90ee-cdb3a872ecae-kube-api-access-m7zhz\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.875916 4889 scope.go:117] "RemoveContainer" containerID="12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00" Nov 28 07:10:27 crc kubenswrapper[4889]: E1128 07:10:27.876666 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00\": container with ID starting with 12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00 not found: ID does not exist" containerID="12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.876732 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00"} err="failed to get container status \"12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00\": rpc error: code = NotFound desc = could not find container \"12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00\": container with ID starting with 12a66fbdd9902a93a57f891689a2271e435ff1a67f22f7e52474b30b57b25f00 not found: ID does not exist" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.878259 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.890152 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.902084 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:10:27 crc kubenswrapper[4889]: E1128 07:10:27.902494 4889 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b374792e-0c42-49c4-90ee-cdb3a872ecae" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.902512 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b374792e-0c42-49c4-90ee-cdb3a872ecae" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.902691 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="b374792e-0c42-49c4-90ee-cdb3a872ecae" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.903324 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.906668 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.906971 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.907227 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.935874 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.935920 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.936043 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd87p\" (UniqueName: \"kubernetes.io/projected/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-kube-api-access-kd87p\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.936072 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.936188 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:27 crc kubenswrapper[4889]: I1128 07:10:27.937605 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.038176 4889 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.038239 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.038267 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.038340 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd87p\" (UniqueName: \"kubernetes.io/projected/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-kube-api-access-kd87p\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.038368 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.041889 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.042382 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.042502 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.043885 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.058218 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd87p\" (UniqueName: 
\"kubernetes.io/projected/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-kube-api-access-kd87p\") pod \"nova-cell1-novncproxy-0\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.221009 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.701834 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:10:28 crc kubenswrapper[4889]: I1128 07:10:28.841937 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d578f2c7-2fee-4032-b63e-0dc8e5d1371f","Type":"ContainerStarted","Data":"735e526bec38bdc5a76467769391ff95a9ff6e58007eaa305c7bc47cdd7e8aad"} Nov 28 07:10:29 crc kubenswrapper[4889]: I1128 07:10:29.349050 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b374792e-0c42-49c4-90ee-cdb3a872ecae" path="/var/lib/kubelet/pods/b374792e-0c42-49c4-90ee-cdb3a872ecae/volumes" Nov 28 07:10:29 crc kubenswrapper[4889]: I1128 07:10:29.852700 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d578f2c7-2fee-4032-b63e-0dc8e5d1371f","Type":"ContainerStarted","Data":"29d04d773589b050b9a77e90cdf11d2996f36460fa7d4f5ca93bba075ac8e4fd"} Nov 28 07:10:29 crc kubenswrapper[4889]: I1128 07:10:29.881829 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.8818104079999998 podStartE2EDuration="2.881810408s" podCreationTimestamp="2025-11-28 07:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:29.86818144 +0000 UTC m=+1352.838415615" watchObservedRunningTime="2025-11-28 07:10:29.881810408 +0000 UTC m=+1352.852044563" Nov 28 07:10:29 crc kubenswrapper[4889]: I1128 07:10:29.987976 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 07:10:29 crc kubenswrapper[4889]: I1128 07:10:29.988055 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 07:10:29 crc kubenswrapper[4889]: I1128 07:10:29.988427 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 07:10:29 crc kubenswrapper[4889]: I1128 07:10:29.988484 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 07:10:29 crc kubenswrapper[4889]: I1128 07:10:29.993010 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:29.999999 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.216942 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d8fc4ccc9-wc58j"] Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.225738 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.237926 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8fc4ccc9-wc58j"] Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.282800 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.282861 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.282939 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-config\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.282972 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-svc\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.283014 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4tqk\" (UniqueName: \"kubernetes.io/projected/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-kube-api-access-l4tqk\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.283108 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.385102 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.385156 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.385198 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.385273 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-svc\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.385292 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-config\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.385318 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4tqk\" (UniqueName: \"kubernetes.io/projected/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-kube-api-access-l4tqk\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.385896 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.386297 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.386446 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-config\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.387117 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.387463 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-svc\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.406667 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4tqk\" (UniqueName: 
\"kubernetes.io/projected/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-kube-api-access-l4tqk\") pod \"dnsmasq-dns-5d8fc4ccc9-wc58j\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:30 crc kubenswrapper[4889]: I1128 07:10:30.548318 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:31 crc kubenswrapper[4889]: I1128 07:10:31.061842 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8fc4ccc9-wc58j"] Nov 28 07:10:31 crc kubenswrapper[4889]: I1128 07:10:31.889685 4889 generic.go:334] "Generic (PLEG): container finished" podID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" containerID="8e098cfc463032d4804741da5efedf5c6767301dd053935296f53b49eb1889cf" exitCode=0 Nov 28 07:10:31 crc kubenswrapper[4889]: I1128 07:10:31.890855 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" event={"ID":"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb","Type":"ContainerDied","Data":"8e098cfc463032d4804741da5efedf5c6767301dd053935296f53b49eb1889cf"} Nov 28 07:10:31 crc kubenswrapper[4889]: I1128 07:10:31.890916 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" event={"ID":"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb","Type":"ContainerStarted","Data":"e1687fc4fb4c147c5234087ff66a74472664de15b21ffc743ce7d798f4241678"} Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.579861 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.580389 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="ceilometer-central-agent" containerID="cri-o://d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35" gracePeriod=30 Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.580514 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="proxy-httpd" containerID="cri-o://8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c" gracePeriod=30 Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.580558 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="sg-core" containerID="cri-o://e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0" gracePeriod=30 Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.580591 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="ceilometer-notification-agent" containerID="cri-o://65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb" gracePeriod=30 Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.590951 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.191:3000/\": EOF" Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.910106 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" 
event={"ID":"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb","Type":"ContainerStarted","Data":"2365c7b49a5eac186167c60fde0c3ed33a799576881ae61606acd63b56a773ae"} Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.910534 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.914552 4889 generic.go:334] "Generic (PLEG): container finished" podID="96c85297-311c-4694-b0be-96359d8dc923" containerID="8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c" exitCode=0 Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.914793 4889 generic.go:334] "Generic (PLEG): container finished" podID="96c85297-311c-4694-b0be-96359d8dc923" containerID="e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0" exitCode=2 Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.915045 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerDied","Data":"8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c"} Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.915192 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerDied","Data":"e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0"} Nov 28 07:10:32 crc kubenswrapper[4889]: I1128 07:10:32.940041 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" podStartSLOduration=2.940025062 podStartE2EDuration="2.940025062s" podCreationTimestamp="2025-11-28 07:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:32.93536035 +0000 UTC m=+1355.905594505" watchObservedRunningTime="2025-11-28 07:10:32.940025062 +0000 UTC m=+1355.910259207" Nov 28 07:10:33 crc kubenswrapper[4889]: I1128 07:10:33.078732 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:33 crc kubenswrapper[4889]: I1128 07:10:33.078940 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-log" containerID="cri-o://4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd" gracePeriod=30 Nov 28 07:10:33 crc kubenswrapper[4889]: I1128 07:10:33.079090 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-api" containerID="cri-o://5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d" gracePeriod=30 Nov 28 07:10:33 crc kubenswrapper[4889]: I1128 07:10:33.221164 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:10:33 crc kubenswrapper[4889]: I1128 07:10:33.926482 4889 generic.go:334] "Generic (PLEG): container finished" podID="96c85297-311c-4694-b0be-96359d8dc923" containerID="d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35" exitCode=0 Nov 28 07:10:33 crc kubenswrapper[4889]: I1128 07:10:33.926543 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerDied","Data":"d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35"} Nov 28 07:10:33 crc kubenswrapper[4889]: I1128 07:10:33.928766 4889 generic.go:334] "Generic (PLEG): container finished" podID="ba193675-44c4-4a85-b16d-66e0a5102004" containerID="4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd" exitCode=143 Nov 28 07:10:33 crc kubenswrapper[4889]: I1128 07:10:33.928831 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba193675-44c4-4a85-b16d-66e0a5102004","Type":"ContainerDied","Data":"4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd"} Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.350776 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.511869 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-log-httpd\") pod \"96c85297-311c-4694-b0be-96359d8dc923\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.511985 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-config-data\") pod \"96c85297-311c-4694-b0be-96359d8dc923\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.512140 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-combined-ca-bundle\") pod \"96c85297-311c-4694-b0be-96359d8dc923\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.512314 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-run-httpd\") pod \"96c85297-311c-4694-b0be-96359d8dc923\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.512418 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-ceilometer-tls-certs\") pod \"96c85297-311c-4694-b0be-96359d8dc923\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.512485 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-scripts\") pod \"96c85297-311c-4694-b0be-96359d8dc923\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.512524 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-sg-core-conf-yaml\") pod \"96c85297-311c-4694-b0be-96359d8dc923\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.512559 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6x2x\" (UniqueName: 
\"kubernetes.io/projected/96c85297-311c-4694-b0be-96359d8dc923-kube-api-access-b6x2x\") pod \"96c85297-311c-4694-b0be-96359d8dc923\" (UID: \"96c85297-311c-4694-b0be-96359d8dc923\") " Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.512632 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "96c85297-311c-4694-b0be-96359d8dc923" (UID: "96c85297-311c-4694-b0be-96359d8dc923"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.513370 4889 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.514503 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "96c85297-311c-4694-b0be-96359d8dc923" (UID: "96c85297-311c-4694-b0be-96359d8dc923"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.518437 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-scripts" (OuterVolumeSpecName: "scripts") pod "96c85297-311c-4694-b0be-96359d8dc923" (UID: "96c85297-311c-4694-b0be-96359d8dc923"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.521991 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c85297-311c-4694-b0be-96359d8dc923-kube-api-access-b6x2x" (OuterVolumeSpecName: "kube-api-access-b6x2x") pod "96c85297-311c-4694-b0be-96359d8dc923" (UID: "96c85297-311c-4694-b0be-96359d8dc923"). InnerVolumeSpecName "kube-api-access-b6x2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.551850 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "96c85297-311c-4694-b0be-96359d8dc923" (UID: "96c85297-311c-4694-b0be-96359d8dc923"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.573752 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "96c85297-311c-4694-b0be-96359d8dc923" (UID: "96c85297-311c-4694-b0be-96359d8dc923"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.595139 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96c85297-311c-4694-b0be-96359d8dc923" (UID: "96c85297-311c-4694-b0be-96359d8dc923"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.615871 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.615910 4889 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96c85297-311c-4694-b0be-96359d8dc923-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.615922 4889 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.615936 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.615948 4889 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.615959 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6x2x\" (UniqueName: \"kubernetes.io/projected/96c85297-311c-4694-b0be-96359d8dc923-kube-api-access-b6x2x\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.633612 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-config-data" (OuterVolumeSpecName: "config-data") pod "96c85297-311c-4694-b0be-96359d8dc923" (UID: "96c85297-311c-4694-b0be-96359d8dc923"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.717171 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c85297-311c-4694-b0be-96359d8dc923-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946393 4889 generic.go:334] "Generic (PLEG): container finished" podID="96c85297-311c-4694-b0be-96359d8dc923" containerID="65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb" exitCode=0 Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946433 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerDied","Data":"65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb"} Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946459 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerDied","Data":"e139b5775b89f58e6ff18dfd61b0e6a5f74ae0ea524069043fe983320db67e42"} Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946475 4889 scope.go:117] "RemoveContainer" containerID="8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c" Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946622 4889 util.go:48] "No ready sandbox for pod can be found. 
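[Editor's note] The UnmountVolume -> TearDown succeeded -> "Volume detached" progression above is kubelet's volume manager reconciling actual state against desired state: the old ceilometer-0 pod is gone from the desired world, so every volume still mounted for it is torn down and then marked detached. A minimal sketch of that diff-and-act loop, with heavily simplified types (real kubelet keys state per pod UID and tracks plugin, device path, and more):

```go
package main

import "fmt"

// reconcile unmounts every volume present in the actual state but absent
// from the desired state, mirroring the record sequence in the log.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
			// ... the volume plugin's TearDown would run here ...
			fmt.Printf("Volume detached for volume %q\n", vol)
			delete(actual, vol)
		}
	}
}

func main() {
	desired := map[string]bool{} // pod deleted: nothing should stay mounted
	actual := map[string]bool{
		"log-httpd": true, "config-data": true, "combined-ca-bundle": true,
		"run-httpd": true, "ceilometer-tls-certs": true, "scripts": true,
		"sg-core-conf-yaml": true, "kube-api-access-b6x2x": true,
	}
	reconcile(desired, actual)
}
```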
Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946393 4889 generic.go:334] "Generic (PLEG): container finished" podID="96c85297-311c-4694-b0be-96359d8dc923" containerID="65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb" exitCode=0
Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946433 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerDied","Data":"65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb"}
Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946459 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96c85297-311c-4694-b0be-96359d8dc923","Type":"ContainerDied","Data":"e139b5775b89f58e6ff18dfd61b0e6a5f74ae0ea524069043fe983320db67e42"}
Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946475 4889 scope.go:117] "RemoveContainer" containerID="8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c"
Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.946622 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.986910 4889 scope.go:117] "RemoveContainer" containerID="e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0"
Nov 28 07:10:35 crc kubenswrapper[4889]: I1128 07:10:35.993301 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.007276 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.026722 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:36 crc kubenswrapper[4889]: E1128 07:10:36.027182 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="ceilometer-central-agent"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.027202 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="ceilometer-central-agent"
Nov 28 07:10:36 crc kubenswrapper[4889]: E1128 07:10:36.027211 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="proxy-httpd"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.027217 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="proxy-httpd"
Nov 28 07:10:36 crc kubenswrapper[4889]: E1128 07:10:36.027232 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="sg-core"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.027239 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="sg-core"
Nov 28 07:10:36 crc kubenswrapper[4889]: E1128 07:10:36.027262 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="ceilometer-notification-agent"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.027268 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="ceilometer-notification-agent"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.027431 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="ceilometer-notification-agent"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.027444 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="sg-core"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.027460 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="ceilometer-central-agent"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.027476 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c85297-311c-4694-b0be-96359d8dc923" containerName="proxy-httpd"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.030275 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.032861 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.032912 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.036019 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.071427 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.074180 4889 scope.go:117] "RemoveContainer" containerID="65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.092392 4889 scope.go:117] "RemoveContainer" containerID="d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.108888 4889 scope.go:117] "RemoveContainer" containerID="8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c"
Nov 28 07:10:36 crc kubenswrapper[4889]: E1128 07:10:36.109402 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c\": container with ID starting with 8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c not found: ID does not exist" containerID="8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.109435 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c"} err="failed to get container status \"8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c\": rpc error: code = NotFound desc = could not find container \"8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c\": container with ID starting with 8a6c8725af8a3d1298b6826196670e3811ec7b42146531e0100b4089aa63428c not found: ID does not exist"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.109495 4889 scope.go:117] "RemoveContainer" containerID="e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0"
Nov 28 07:10:36 crc kubenswrapper[4889]: E1128 07:10:36.109969 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0\": container with ID starting with e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0 not found: ID does not exist" containerID="e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.110009 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0"} err="failed to get container status \"e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0\": rpc error: code = NotFound desc = could not find container \"e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0\": container with ID starting with e4715b8f9f0cc7c254b3b0ca4a45404bdb5e6a86b49ab457391274ef5f6f13f0 not found: ID does not exist"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.110036 4889 scope.go:117] "RemoveContainer" containerID="65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb"
Nov 28 07:10:36 crc kubenswrapper[4889]: E1128 07:10:36.110362 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb\": container with ID starting with 65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb not found: ID does not exist" containerID="65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.110448 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb"} err="failed to get container status \"65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb\": rpc error: code = NotFound desc = could not find container \"65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb\": container with ID starting with 65ce111f4f67876dac6da4671f828ccef13dc141fa702a7ef79e72c79350d2cb not found: ID does not exist"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.110470 4889 scope.go:117] "RemoveContainer" containerID="d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35"
Nov 28 07:10:36 crc kubenswrapper[4889]: E1128 07:10:36.110769 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35\": container with ID starting with d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35 not found: ID does not exist" containerID="d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35"
Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.110802 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35"} err="failed to get container status \"d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35\": rpc error: code = NotFound desc = could not find container \"d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35\": container with ID starting with d5f9d279066afcb689787c2eb5a127c50f5ed06dad964011fc8a6620b7399d35 not found: ID does not exist"
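[Editor's note] The RemoveContainer -> "ContainerStatus from runtime service failed ... NotFound" -> "DeleteContainer returned error" cascade above is benign: these containers were already removed along with the pod sandbox, so the second delete finds nothing. The standard pattern is to treat gRPC NotFound as success so deletion stays idempotent. A minimal sketch, using a stand-in interface rather than the real CRI RuntimeService client:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtime is a hypothetical stand-in for the CRI runtime client.
type runtime interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeIdempotent treats "already gone" as success.
func removeIdempotent(ctx context.Context, rt runtime, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if err != nil && status.Code(err) == codes.NotFound {
		return nil // container already removed: nothing left to do
	}
	return err
}

// fakeRuntime always reports NotFound, like the runtime in the log above.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(ctx context.Context, id string) error {
	return status.Error(codes.NotFound, fmt.Sprintf("could not find container %q", id))
}

func main() {
	// prints <nil>: the NotFound is swallowed and deletion succeeds
	fmt.Println(removeIdempotent(context.Background(), fakeRuntime{}, "8a6c8725af8a"))
}
```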
\"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.124392 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.124437 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.124457 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-scripts\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.124480 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-run-httpd\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.124788 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-config-data\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227138 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qr4\" (UniqueName: \"kubernetes.io/projected/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-kube-api-access-r6qr4\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227200 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-log-httpd\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227237 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227288 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227311 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-scripts\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227339 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-run-httpd\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227421 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-config-data\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227474 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227926 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-log-httpd\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.227953 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-run-httpd\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.232643 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.232752 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.232834 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-config-data\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.234195 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-scripts\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.245485 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.246876 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6qr4\" (UniqueName: \"kubernetes.io/projected/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-kube-api-access-r6qr4\") pod \"ceilometer-0\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.374566 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.584795 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.736015 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-config-data\") pod \"ba193675-44c4-4a85-b16d-66e0a5102004\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.736234 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba193675-44c4-4a85-b16d-66e0a5102004-logs\") pod \"ba193675-44c4-4a85-b16d-66e0a5102004\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.736283 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-combined-ca-bundle\") pod \"ba193675-44c4-4a85-b16d-66e0a5102004\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.736305 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkxhr\" (UniqueName: \"kubernetes.io/projected/ba193675-44c4-4a85-b16d-66e0a5102004-kube-api-access-zkxhr\") pod \"ba193675-44c4-4a85-b16d-66e0a5102004\" (UID: \"ba193675-44c4-4a85-b16d-66e0a5102004\") " Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.737388 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba193675-44c4-4a85-b16d-66e0a5102004-logs" (OuterVolumeSpecName: "logs") pod "ba193675-44c4-4a85-b16d-66e0a5102004" (UID: "ba193675-44c4-4a85-b16d-66e0a5102004"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.742022 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba193675-44c4-4a85-b16d-66e0a5102004-kube-api-access-zkxhr" (OuterVolumeSpecName: "kube-api-access-zkxhr") pod "ba193675-44c4-4a85-b16d-66e0a5102004" (UID: "ba193675-44c4-4a85-b16d-66e0a5102004"). InnerVolumeSpecName "kube-api-access-zkxhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.763844 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba193675-44c4-4a85-b16d-66e0a5102004" (UID: "ba193675-44c4-4a85-b16d-66e0a5102004"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.775883 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-config-data" (OuterVolumeSpecName: "config-data") pod "ba193675-44c4-4a85-b16d-66e0a5102004" (UID: "ba193675-44c4-4a85-b16d-66e0a5102004"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.818284 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.825482 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.838019 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba193675-44c4-4a85-b16d-66e0a5102004-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.838053 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.838064 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkxhr\" (UniqueName: \"kubernetes.io/projected/ba193675-44c4-4a85-b16d-66e0a5102004-kube-api-access-zkxhr\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.838073 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba193675-44c4-4a85-b16d-66e0a5102004-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.957207 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerStarted","Data":"f119b919f166af15d837a07f7292e13a351fe294e3d6ace6be2440be956f3a17"} Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.959816 4889 generic.go:334] "Generic (PLEG): container finished" podID="ba193675-44c4-4a85-b16d-66e0a5102004" containerID="5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d" exitCode=0 Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.959855 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba193675-44c4-4a85-b16d-66e0a5102004","Type":"ContainerDied","Data":"5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d"} Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.959863 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.959885 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba193675-44c4-4a85-b16d-66e0a5102004","Type":"ContainerDied","Data":"e05a8f56feadc5b238d2570f0ed73124bb91ac17253224357cf423aa2ef57f6d"} Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.959907 4889 scope.go:117] "RemoveContainer" containerID="5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.982431 4889 scope.go:117] "RemoveContainer" containerID="4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd" Nov 28 07:10:36 crc kubenswrapper[4889]: I1128 07:10:36.999333 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.009001 4889 scope.go:117] "RemoveContainer" containerID="5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d" Nov 28 07:10:37 crc kubenswrapper[4889]: E1128 07:10:37.009558 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d\": container with ID starting with 5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d not found: ID does not exist" containerID="5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.009668 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d"} err="failed to get container status \"5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d\": rpc error: code = NotFound desc = could not find container \"5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d\": container with ID starting with 5f83c7609da20892335b1b6cf4d3ce71baef5cadbb4963f9bc24f0d5a0bd545d not found: ID does not exist" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.009831 4889 scope.go:117] "RemoveContainer" containerID="4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd" Nov 28 07:10:37 crc kubenswrapper[4889]: E1128 07:10:37.010808 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd\": container with ID starting with 4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd not found: ID does not exist" containerID="4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.010851 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd"} err="failed to get container status \"4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd\": rpc error: code = NotFound desc = could not find container \"4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd\": container with ID starting with 4d62e46ffe11802262262fae01270db93cfa1756488b831e76f6433aa9c9eddd not found: ID does not exist" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.014177 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.024785 4889 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:37 crc kubenswrapper[4889]: E1128 07:10:37.025317 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-api" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.025340 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-api" Nov 28 07:10:37 crc kubenswrapper[4889]: E1128 07:10:37.025374 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-log" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.025383 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-log" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.025616 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-log" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.025646 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" containerName="nova-api-api" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.027014 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.030476 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.030792 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.031074 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.033618 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.143943 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e44fe1-819e-47c5-b28e-737474eac475-logs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.144094 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-config-data\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.144166 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.144468 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: 
I1128 07:10:37.144503 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.144606 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p586g\" (UniqueName: \"kubernetes.io/projected/a3e44fe1-819e-47c5-b28e-737474eac475-kube-api-access-p586g\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.246929 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e44fe1-819e-47c5-b28e-737474eac475-logs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.247006 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-config-data\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.247034 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.247088 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.247105 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.247156 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p586g\" (UniqueName: \"kubernetes.io/projected/a3e44fe1-819e-47c5-b28e-737474eac475-kube-api-access-p586g\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.247928 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e44fe1-819e-47c5-b28e-737474eac475-logs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.251498 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 
07:10:37.251965 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.252177 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.252845 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-config-data\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.268346 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p586g\" (UniqueName: \"kubernetes.io/projected/a3e44fe1-819e-47c5-b28e-737474eac475-kube-api-access-p586g\") pod \"nova-api-0\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " pod="openstack/nova-api-0" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.354729 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c85297-311c-4694-b0be-96359d8dc923" path="/var/lib/kubelet/pods/96c85297-311c-4694-b0be-96359d8dc923/volumes" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.355568 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba193675-44c4-4a85-b16d-66e0a5102004" path="/var/lib/kubelet/pods/ba193675-44c4-4a85-b16d-66e0a5102004/volumes" Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.371770 4889 util.go:30] "No sandbox for pod can be found. 
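[Editor's note] The "Cleaned up orphaned pod volumes dir" records just above show kubelet's housekeeping: once a pod's API object is gone and all its volumes are unmounted, the leftover /var/lib/kubelet/pods/<uid>/volumes directory is removed. A rough, illustrative sketch of that sweep (destructive; assumes activeUIDs is the set of pods kubelet still manages, and only checks that the dir is empty, whereas real kubelet also verifies nothing is still mounted):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes empty volumes dirs for pod UIDs that are
// no longer managed, mirroring the kubelet_volumes.go records above.
func cleanupOrphanedPodDirs(podsRoot string, active map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue // still a live pod: leave it alone
		}
		volDir := filepath.Join(podsRoot, e.Name(), "volumes")
		if sub, err := os.ReadDir(volDir); err == nil && len(sub) == 0 {
			if err := os.RemoveAll(volDir); err != nil {
				return err
			}
			fmt.Printf("Cleaned up orphaned pod volumes dir %s\n", volDir)
		}
	}
	return nil
}

func main() {
	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{
		"511987a9-2a20-4fe8-9f21-ebc0f6b171cf": true, // replacement ceilometer-0
	})
}
```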
Nov 28 07:10:37 crc kubenswrapper[4889]: W1128 07:10:37.823619 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3e44fe1_819e_47c5_b28e_737474eac475.slice/crio-6e6459c7ebea20344bd802249b6f12f846fff30f039848bbb75751e5efa326e9 WatchSource:0}: Error finding container 6e6459c7ebea20344bd802249b6f12f846fff30f039848bbb75751e5efa326e9: Status 404 returned error can't find the container with id 6e6459c7ebea20344bd802249b6f12f846fff30f039848bbb75751e5efa326e9
Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.825430 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.974148 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerStarted","Data":"5b791b0ee5ba22707eb669b678aaed1200bebe3fe2bc24e3b032cc3e5c25310a"}
Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.975230 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3e44fe1-819e-47c5-b28e-737474eac475","Type":"ContainerStarted","Data":"0e4b899f7f214990977f6362d05ebc0bdac7bd8b89c56c2811cac2057ab5830a"}
Nov 28 07:10:37 crc kubenswrapper[4889]: I1128 07:10:37.975254 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3e44fe1-819e-47c5-b28e-737474eac475","Type":"ContainerStarted","Data":"6e6459c7ebea20344bd802249b6f12f846fff30f039848bbb75751e5efa326e9"}
Nov 28 07:10:38 crc kubenswrapper[4889]: I1128 07:10:38.222737 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Nov 28 07:10:38 crc kubenswrapper[4889]: I1128 07:10:38.243424 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Nov 28 07:10:38 crc kubenswrapper[4889]: I1128 07:10:38.983237 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerStarted","Data":"206b0078bfc628bc38b7ee44283277823901e99e7548ec8345145fcecd5a4005"}
Nov 28 07:10:38 crc kubenswrapper[4889]: I1128 07:10:38.984282 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3e44fe1-819e-47c5-b28e-737474eac475","Type":"ContainerStarted","Data":"a7e79cfbb8ab4fd42fccbf123ae90d0b8a7b68059f05ba2ab0d416261563ed69"}
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.005674 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.005652809 podStartE2EDuration="3.005652809s" podCreationTimestamp="2025-11-28 07:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:39.003327463 +0000 UTC m=+1361.973561628" watchObservedRunningTime="2025-11-28 07:10:39.005652809 +0000 UTC m=+1361.975886974"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.010750 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.215353 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-r99w7"]
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.216960 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.221341 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.221563 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.224611 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-config-data\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.224667 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-scripts\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.224839 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.224884 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5w8\" (UniqueName: \"kubernetes.io/projected/ba3ba7d9-162a-4393-804b-0713bcc88a9c-kube-api-access-7d5w8\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.227810 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-r99w7"]
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.326050 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.326124 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d5w8\" (UniqueName: \"kubernetes.io/projected/ba3ba7d9-162a-4393-804b-0713bcc88a9c-kube-api-access-7d5w8\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.326157 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-config-data\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.326196 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-scripts\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.335950 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.336340 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-scripts\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.336986 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-config-data\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.350069 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5w8\" (UniqueName: \"kubernetes.io/projected/ba3ba7d9-162a-4393-804b-0713bcc88a9c-kube-api-access-7d5w8\") pod \"nova-cell1-cell-mapping-r99w7\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:39 crc kubenswrapper[4889]: I1128 07:10:39.583298 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r99w7"
Nov 28 07:10:40 crc kubenswrapper[4889]: I1128 07:10:40.001033 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerStarted","Data":"eafa471d1e83e5c4174ab8b6222ceaeab6bcb18dd8a04cfa44cdd7c9aaae7176"}
Nov 28 07:10:40 crc kubenswrapper[4889]: W1128 07:10:40.010445 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba3ba7d9_162a_4393_804b_0713bcc88a9c.slice/crio-6c53fb76e65301dcd2477a6b9072c7caacbd4c3bf0489f610bf63c227aa5648f WatchSource:0}: Error finding container 6c53fb76e65301dcd2477a6b9072c7caacbd4c3bf0489f610bf63c227aa5648f: Status 404 returned error can't find the container with id 6c53fb76e65301dcd2477a6b9072c7caacbd4c3bf0489f610bf63c227aa5648f
Nov 28 07:10:40 crc kubenswrapper[4889]: I1128 07:10:40.014443 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-r99w7"]
Nov 28 07:10:40 crc kubenswrapper[4889]: I1128 07:10:40.549814 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j"
Nov 28 07:10:40 crc kubenswrapper[4889]: I1128 07:10:40.609642 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5594d9b959-mlsc9"]
Nov 28 07:10:40 crc kubenswrapper[4889]: I1128 07:10:40.609919 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" podUID="a16d6068-941b-4d5f-a74c-42e363182095" containerName="dnsmasq-dns" containerID="cri-o://29c66f3ede3318f319febf94be785cfc6184dc3969697eb9b727d92f7dbc4ea3" gracePeriod=10
Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.012143 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerStarted","Data":"0c2cdf84f726e62f45e47d4328d523cb24c975652d68073e00aa714625b828c0"}
Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.013883 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.024907 4889 generic.go:334] "Generic (PLEG): container finished" podID="a16d6068-941b-4d5f-a74c-42e363182095" containerID="29c66f3ede3318f319febf94be785cfc6184dc3969697eb9b727d92f7dbc4ea3" exitCode=0
Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.024989 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" event={"ID":"a16d6068-941b-4d5f-a74c-42e363182095","Type":"ContainerDied","Data":"29c66f3ede3318f319febf94be785cfc6184dc3969697eb9b727d92f7dbc4ea3"}
Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.032246 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r99w7" event={"ID":"ba3ba7d9-162a-4393-804b-0713bcc88a9c","Type":"ContainerStarted","Data":"46e2b61c8e6ecfe9ae9928060f1a929cb5525c2e321d7fb2129b0bb6ab9cc8a8"}
Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.032294 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r99w7" event={"ID":"ba3ba7d9-162a-4393-804b-0713bcc88a9c","Type":"ContainerStarted","Data":"6c53fb76e65301dcd2477a6b9072c7caacbd4c3bf0489f610bf63c227aa5648f"}
Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.048586 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.509385472 podStartE2EDuration="5.048565138s" podCreationTimestamp="2025-11-28 07:10:36 +0000 UTC" firstStartedPulling="2025-11-28 07:10:36.825279499 +0000 UTC m=+1359.795513654" lastFinishedPulling="2025-11-28 07:10:40.364459165 +0000 UTC m=+1363.334693320" observedRunningTime="2025-11-28 07:10:41.041618025 +0000 UTC m=+1364.011852200" watchObservedRunningTime="2025-11-28 07:10:41.048565138 +0000 UTC m=+1364.018799293"
Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.069009 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-r99w7" podStartSLOduration=2.068994447 podStartE2EDuration="2.068994447s" podCreationTimestamp="2025-11-28 07:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:41.064480465 +0000 UTC m=+1364.034714620" watchObservedRunningTime="2025-11-28 07:10:41.068994447 +0000 UTC m=+1364.039228602"
"kubernetes.io/projected/a16d6068-941b-4d5f-a74c-42e363182095-kube-api-access-ljdbt" (OuterVolumeSpecName: "kube-api-access-ljdbt") pod "a16d6068-941b-4d5f-a74c-42e363182095" (UID: "a16d6068-941b-4d5f-a74c-42e363182095"). InnerVolumeSpecName "kube-api-access-ljdbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.314000 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a16d6068-941b-4d5f-a74c-42e363182095" (UID: "a16d6068-941b-4d5f-a74c-42e363182095"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.314923 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a16d6068-941b-4d5f-a74c-42e363182095" (UID: "a16d6068-941b-4d5f-a74c-42e363182095"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.319827 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-config" (OuterVolumeSpecName: "config") pod "a16d6068-941b-4d5f-a74c-42e363182095" (UID: "a16d6068-941b-4d5f-a74c-42e363182095"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.322998 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a16d6068-941b-4d5f-a74c-42e363182095" (UID: "a16d6068-941b-4d5f-a74c-42e363182095"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.325296 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a16d6068-941b-4d5f-a74c-42e363182095" (UID: "a16d6068-941b-4d5f-a74c-42e363182095"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.365283 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.365320 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.365333 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.365346 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljdbt\" (UniqueName: \"kubernetes.io/projected/a16d6068-941b-4d5f-a74c-42e363182095-kube-api-access-ljdbt\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.365359 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:41 crc kubenswrapper[4889]: I1128 07:10:41.365370 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a16d6068-941b-4d5f-a74c-42e363182095-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.042324 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" event={"ID":"a16d6068-941b-4d5f-a74c-42e363182095","Type":"ContainerDied","Data":"d50d03cd0ef1d97a5d2a01f7238a5febfecac201dace7765fbd6a1518cf87ddf"} Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.042394 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5594d9b959-mlsc9" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.042734 4889 scope.go:117] "RemoveContainer" containerID="29c66f3ede3318f319febf94be785cfc6184dc3969697eb9b727d92f7dbc4ea3" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.066104 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5594d9b959-mlsc9"] Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.070852 4889 scope.go:117] "RemoveContainer" containerID="3dc327a27cf51a0ade520631c8b7fc4d421557f1c38ffb141e797fc692974432" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.076357 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5594d9b959-mlsc9"] Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.463595 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wm65j"] Nov 28 07:10:42 crc kubenswrapper[4889]: E1128 07:10:42.464213 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16d6068-941b-4d5f-a74c-42e363182095" containerName="init" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.464230 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16d6068-941b-4d5f-a74c-42e363182095" containerName="init" Nov 28 07:10:42 crc kubenswrapper[4889]: E1128 07:10:42.464250 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16d6068-941b-4d5f-a74c-42e363182095" containerName="dnsmasq-dns" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.464257 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16d6068-941b-4d5f-a74c-42e363182095" containerName="dnsmasq-dns" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.464452 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16d6068-941b-4d5f-a74c-42e363182095" containerName="dnsmasq-dns" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.465717 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.477821 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wm65j"] Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.482527 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-utilities\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.482561 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzmrq\" (UniqueName: \"kubernetes.io/projected/ae6139fc-fc78-4423-8a8d-d526220b6d2a-kube-api-access-nzmrq\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.482626 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-catalog-content\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.584358 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-utilities\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.584652 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzmrq\" (UniqueName: \"kubernetes.io/projected/ae6139fc-fc78-4423-8a8d-d526220b6d2a-kube-api-access-nzmrq\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.584725 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-catalog-content\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.585184 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-utilities\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.585218 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-catalog-content\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.607660 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nzmrq\" (UniqueName: \"kubernetes.io/projected/ae6139fc-fc78-4423-8a8d-d526220b6d2a-kube-api-access-nzmrq\") pod \"redhat-operators-wm65j\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") " pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:42 crc kubenswrapper[4889]: I1128 07:10:42.784301 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:43 crc kubenswrapper[4889]: I1128 07:10:43.247863 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wm65j"] Nov 28 07:10:43 crc kubenswrapper[4889]: I1128 07:10:43.364593 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a16d6068-941b-4d5f-a74c-42e363182095" path="/var/lib/kubelet/pods/a16d6068-941b-4d5f-a74c-42e363182095/volumes" Nov 28 07:10:44 crc kubenswrapper[4889]: I1128 07:10:44.060571 4889 generic.go:334] "Generic (PLEG): container finished" podID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerID="512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8" exitCode=0 Nov 28 07:10:44 crc kubenswrapper[4889]: I1128 07:10:44.060785 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wm65j" event={"ID":"ae6139fc-fc78-4423-8a8d-d526220b6d2a","Type":"ContainerDied","Data":"512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8"} Nov 28 07:10:44 crc kubenswrapper[4889]: I1128 07:10:44.060929 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wm65j" event={"ID":"ae6139fc-fc78-4423-8a8d-d526220b6d2a","Type":"ContainerStarted","Data":"6aa0df901c4f4b231cdf9650d0254e4b18bb6a80bb221ff585fa59269eec4b4f"} Nov 28 07:10:46 crc kubenswrapper[4889]: I1128 07:10:46.081459 4889 generic.go:334] "Generic (PLEG): container finished" podID="ba3ba7d9-162a-4393-804b-0713bcc88a9c" containerID="46e2b61c8e6ecfe9ae9928060f1a929cb5525c2e321d7fb2129b0bb6ab9cc8a8" exitCode=0 Nov 28 07:10:46 crc kubenswrapper[4889]: I1128 07:10:46.081566 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r99w7" event={"ID":"ba3ba7d9-162a-4393-804b-0713bcc88a9c","Type":"ContainerDied","Data":"46e2b61c8e6ecfe9ae9928060f1a929cb5525c2e321d7fb2129b0bb6ab9cc8a8"} Nov 28 07:10:46 crc kubenswrapper[4889]: I1128 07:10:46.084295 4889 generic.go:334] "Generic (PLEG): container finished" podID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerID="ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3" exitCode=0 Nov 28 07:10:46 crc kubenswrapper[4889]: I1128 07:10:46.084349 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wm65j" event={"ID":"ae6139fc-fc78-4423-8a8d-d526220b6d2a","Type":"ContainerDied","Data":"ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3"} Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.095768 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wm65j" event={"ID":"ae6139fc-fc78-4423-8a8d-d526220b6d2a","Type":"ContainerStarted","Data":"78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79"} Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.120374 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wm65j" podStartSLOduration=2.340386478 podStartE2EDuration="5.120355998s" podCreationTimestamp="2025-11-28 07:10:42 +0000 UTC" 
firstStartedPulling="2025-11-28 07:10:44.062287076 +0000 UTC m=+1367.032521231" lastFinishedPulling="2025-11-28 07:10:46.842256596 +0000 UTC m=+1369.812490751" observedRunningTime="2025-11-28 07:10:47.112751119 +0000 UTC m=+1370.082985334" watchObservedRunningTime="2025-11-28 07:10:47.120355998 +0000 UTC m=+1370.090590153" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.376781 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.376832 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.459607 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r99w7" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.534552 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-config-data\") pod \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.534639 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d5w8\" (UniqueName: \"kubernetes.io/projected/ba3ba7d9-162a-4393-804b-0713bcc88a9c-kube-api-access-7d5w8\") pod \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.534813 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-scripts\") pod \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.534851 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-combined-ca-bundle\") pod \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\" (UID: \"ba3ba7d9-162a-4393-804b-0713bcc88a9c\") " Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.541357 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-scripts" (OuterVolumeSpecName: "scripts") pod "ba3ba7d9-162a-4393-804b-0713bcc88a9c" (UID: "ba3ba7d9-162a-4393-804b-0713bcc88a9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.561217 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3ba7d9-162a-4393-804b-0713bcc88a9c-kube-api-access-7d5w8" (OuterVolumeSpecName: "kube-api-access-7d5w8") pod "ba3ba7d9-162a-4393-804b-0713bcc88a9c" (UID: "ba3ba7d9-162a-4393-804b-0713bcc88a9c"). InnerVolumeSpecName "kube-api-access-7d5w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.576621 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba3ba7d9-162a-4393-804b-0713bcc88a9c" (UID: "ba3ba7d9-162a-4393-804b-0713bcc88a9c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.597603 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-config-data" (OuterVolumeSpecName: "config-data") pod "ba3ba7d9-162a-4393-804b-0713bcc88a9c" (UID: "ba3ba7d9-162a-4393-804b-0713bcc88a9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.636949 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.636988 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d5w8\" (UniqueName: \"kubernetes.io/projected/ba3ba7d9-162a-4393-804b-0713bcc88a9c-kube-api-access-7d5w8\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.636999 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:47 crc kubenswrapper[4889]: I1128 07:10:47.637007 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3ba7d9-162a-4393-804b-0713bcc88a9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.108018 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r99w7" Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.110850 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r99w7" event={"ID":"ba3ba7d9-162a-4393-804b-0713bcc88a9c","Type":"ContainerDied","Data":"6c53fb76e65301dcd2477a6b9072c7caacbd4c3bf0489f610bf63c227aa5648f"} Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.110904 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c53fb76e65301dcd2477a6b9072c7caacbd4c3bf0489f610bf63c227aa5648f" Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.386063 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.195:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.394865 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.195:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.399868 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.400118 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-log" containerID="cri-o://0e4b899f7f214990977f6362d05ebc0bdac7bd8b89c56c2811cac2057ab5830a" gracePeriod=30 Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.400666 4889 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-api" containerID="cri-o://a7e79cfbb8ab4fd42fccbf123ae90d0b8a7b68059f05ba2ab0d416261563ed69" gracePeriod=30 Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.413265 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.413537 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="669135df-f9c4-4aab-803b-a1732d33fd42" containerName="nova-scheduler-scheduler" containerID="cri-o://e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1" gracePeriod=30 Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.429285 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.429577 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-log" containerID="cri-o://9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0" gracePeriod=30 Nov 28 07:10:48 crc kubenswrapper[4889]: I1128 07:10:48.429681 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-metadata" containerID="cri-o://1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26" gracePeriod=30 Nov 28 07:10:49 crc kubenswrapper[4889]: I1128 07:10:49.117865 4889 generic.go:334] "Generic (PLEG): container finished" podID="a3e44fe1-819e-47c5-b28e-737474eac475" containerID="0e4b899f7f214990977f6362d05ebc0bdac7bd8b89c56c2811cac2057ab5830a" exitCode=143 Nov 28 07:10:49 crc kubenswrapper[4889]: I1128 07:10:49.117958 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3e44fe1-819e-47c5-b28e-737474eac475","Type":"ContainerDied","Data":"0e4b899f7f214990977f6362d05ebc0bdac7bd8b89c56c2811cac2057ab5830a"} Nov 28 07:10:49 crc kubenswrapper[4889]: I1128 07:10:49.119956 4889 generic.go:334] "Generic (PLEG): container finished" podID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerID="9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0" exitCode=143 Nov 28 07:10:49 crc kubenswrapper[4889]: I1128 07:10:49.119997 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec","Type":"ContainerDied","Data":"9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0"} Nov 28 07:10:50 crc kubenswrapper[4889]: E1128 07:10:50.013227 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1 is running failed: container process not found" containerID="e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:10:50 crc kubenswrapper[4889]: E1128 07:10:50.027488 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1 is running failed: container process not found" 
containerID="e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:10:50 crc kubenswrapper[4889]: E1128 07:10:50.028567 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1 is running failed: container process not found" containerID="e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:10:50 crc kubenswrapper[4889]: E1128 07:10:50.028606 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="669135df-f9c4-4aab-803b-a1732d33fd42" containerName="nova-scheduler-scheduler" Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.130185 4889 generic.go:334] "Generic (PLEG): container finished" podID="669135df-f9c4-4aab-803b-a1732d33fd42" containerID="e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1" exitCode=0 Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.130238 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"669135df-f9c4-4aab-803b-a1732d33fd42","Type":"ContainerDied","Data":"e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1"} Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.242880 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.295209 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-combined-ca-bundle\") pod \"669135df-f9c4-4aab-803b-a1732d33fd42\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.295327 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-config-data\") pod \"669135df-f9c4-4aab-803b-a1732d33fd42\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.295364 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpbg6\" (UniqueName: \"kubernetes.io/projected/669135df-f9c4-4aab-803b-a1732d33fd42-kube-api-access-lpbg6\") pod \"669135df-f9c4-4aab-803b-a1732d33fd42\" (UID: \"669135df-f9c4-4aab-803b-a1732d33fd42\") " Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.303004 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669135df-f9c4-4aab-803b-a1732d33fd42-kube-api-access-lpbg6" (OuterVolumeSpecName: "kube-api-access-lpbg6") pod "669135df-f9c4-4aab-803b-a1732d33fd42" (UID: "669135df-f9c4-4aab-803b-a1732d33fd42"). InnerVolumeSpecName "kube-api-access-lpbg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.325660 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-config-data" (OuterVolumeSpecName: "config-data") pod "669135df-f9c4-4aab-803b-a1732d33fd42" (UID: "669135df-f9c4-4aab-803b-a1732d33fd42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.330955 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "669135df-f9c4-4aab-803b-a1732d33fd42" (UID: "669135df-f9c4-4aab-803b-a1732d33fd42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.400432 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.400462 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669135df-f9c4-4aab-803b-a1732d33fd42-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:50 crc kubenswrapper[4889]: I1128 07:10:50.400471 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpbg6\" (UniqueName: \"kubernetes.io/projected/669135df-f9c4-4aab-803b-a1732d33fd42-kube-api-access-lpbg6\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.140403 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"669135df-f9c4-4aab-803b-a1732d33fd42","Type":"ContainerDied","Data":"76d63bf064f064b8b3bf20a779d7815de763eb9de1c953b60a9f77a86cf26a35"} Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.140441 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.140481 4889 scope.go:117] "RemoveContainer" containerID="e134570a0ad3f8dbe759ad79b9ade7104410ff788a74c751508d91c8c00c60b1" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.186233 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.198290 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.206501 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:51 crc kubenswrapper[4889]: E1128 07:10:51.206894 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669135df-f9c4-4aab-803b-a1732d33fd42" containerName="nova-scheduler-scheduler" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.206906 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="669135df-f9c4-4aab-803b-a1732d33fd42" containerName="nova-scheduler-scheduler" Nov 28 07:10:51 crc kubenswrapper[4889]: E1128 07:10:51.206932 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3ba7d9-162a-4393-804b-0713bcc88a9c" containerName="nova-manage" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.206937 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3ba7d9-162a-4393-804b-0713bcc88a9c" containerName="nova-manage" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.207113 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3ba7d9-162a-4393-804b-0713bcc88a9c" containerName="nova-manage" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.207132 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="669135df-f9c4-4aab-803b-a1732d33fd42" containerName="nova-scheduler-scheduler" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.207764 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.210572 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.214623 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.316882 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4kx8\" (UniqueName: \"kubernetes.io/projected/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-kube-api-access-g4kx8\") pod \"nova-scheduler-0\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.317077 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.317108 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-config-data\") pod \"nova-scheduler-0\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.342844 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="669135df-f9c4-4aab-803b-a1732d33fd42" path="/var/lib/kubelet/pods/669135df-f9c4-4aab-803b-a1732d33fd42/volumes" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.419422 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.419474 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-config-data\") pod \"nova-scheduler-0\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.419593 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4kx8\" (UniqueName: \"kubernetes.io/projected/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-kube-api-access-g4kx8\") pod \"nova-scheduler-0\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.425903 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-config-data\") pod \"nova-scheduler-0\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.427026 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.438093 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4kx8\" (UniqueName: \"kubernetes.io/projected/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-kube-api-access-g4kx8\") pod \"nova-scheduler-0\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.525151 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.559645 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": read tcp 10.217.0.2:43470->10.217.0.187:8775: read: connection reset by peer" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.559876 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": read tcp 10.217.0.2:43474->10.217.0.187:8775: read: connection reset by peer" Nov 28 07:10:51 crc kubenswrapper[4889]: I1128 07:10:51.984966 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.024671 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.141178 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-config-data\") pod \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.141684 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcq9b\" (UniqueName: \"kubernetes.io/projected/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-kube-api-access-hcq9b\") pod \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.141825 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-combined-ca-bundle\") pod \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.141875 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-nova-metadata-tls-certs\") pod \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.141951 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-logs\") pod \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\" (UID: \"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec\") " Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.142890 4889 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-logs" (OuterVolumeSpecName: "logs") pod "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" (UID: "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.143519 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.151153 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-kube-api-access-hcq9b" (OuterVolumeSpecName: "kube-api-access-hcq9b") pod "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" (UID: "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec"). InnerVolumeSpecName "kube-api-access-hcq9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.158326 4889 generic.go:334] "Generic (PLEG): container finished" podID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerID="1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26" exitCode=0 Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.158413 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec","Type":"ContainerDied","Data":"1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26"} Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.158513 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec","Type":"ContainerDied","Data":"d3934a11d20b8be1f0231494bf8f973ae2fb0457e5875b21cc4a8c34d09c8a42"} Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.158536 4889 scope.go:117] "RemoveContainer" containerID="1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.158692 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.161415 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22942b26-7d2f-4a77-9d97-b7bd457dcfe7","Type":"ContainerStarted","Data":"a44fc71b7cbae1df53405bd997c2a412e2c915a79d9f715611c9ef9c5556ff53"} Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.169641 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" (UID: "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.171260 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-config-data" (OuterVolumeSpecName: "config-data") pod "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" (UID: "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec"). InnerVolumeSpecName "config-data". 
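
The "Generic (PLEG): container finished" lines interleaved here come from the pod lifecycle event generator, which periodically relists container states from the runtime and converts state transitions into the ContainerStarted/ContainerDied events the sync loop consumes. A toy relist diff (hard-coded snapshots stand in for the runtime; the truncated container ID is illustrative):

    package main

    import "fmt"

    type event struct{ podID, containerID, kind string }

    // relist diffs two snapshots of container state and emits
    // lifecycle events for each transition, PLEG-style.
    func relist(prev, curr map[string]string, podID string) []event {
    	var events []event
    	for id, state := range curr {
    		if prev[id] == state {
    			continue
    		}
    		switch state {
    		case "running":
    			events = append(events, event{podID, id, "ContainerStarted"})
    		case "exited":
    			events = append(events, event{podID, id, "ContainerDied"})
    		}
    	}
    	return events
    }

    func main() {
    	before := map[string]string{"1ea110...": "running"}
    	after := map[string]string{"1ea110...": "exited"} // metadata container was killed
    	for _, e := range relist(before, after, "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec") {
    		fmt.Printf("Generic (PLEG): %s pod=%s container=%s\n", e.kind, e.podID, e.containerID)
    	}
    }
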
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.181526 4889 scope.go:117] "RemoveContainer" containerID="9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.206460 4889 scope.go:117] "RemoveContainer" containerID="1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26" Nov 28 07:10:52 crc kubenswrapper[4889]: E1128 07:10:52.206898 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26\": container with ID starting with 1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26 not found: ID does not exist" containerID="1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.206934 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26"} err="failed to get container status \"1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26\": rpc error: code = NotFound desc = could not find container \"1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26\": container with ID starting with 1ea1100010591ce3ad3c735ad12b600df70c2c103fb1613275cca92875492d26 not found: ID does not exist" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.206962 4889 scope.go:117] "RemoveContainer" containerID="9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0" Nov 28 07:10:52 crc kubenswrapper[4889]: E1128 07:10:52.207230 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0\": container with ID starting with 9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0 not found: ID does not exist" containerID="9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.207268 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0"} err="failed to get container status \"9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0\": rpc error: code = NotFound desc = could not find container \"9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0\": container with ID starting with 9aac98253ae27d2631de37851542ba7065d62b08ea5708071983cdfcadfc10a0 not found: ID does not exist" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.223963 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" (UID: "854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.245065 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcq9b\" (UniqueName: \"kubernetes.io/projected/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-kube-api-access-hcq9b\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.245104 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.245119 4889 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.245133 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.510385 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.529395 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.542168 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:52 crc kubenswrapper[4889]: E1128 07:10:52.542644 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-log" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.542667 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-log" Nov 28 07:10:52 crc kubenswrapper[4889]: E1128 07:10:52.542744 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-metadata" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.542757 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-metadata" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.542975 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-metadata" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.543008 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" containerName="nova-metadata-log" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.544206 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.547769 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.551804 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.553728 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.654419 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.654475 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56d3b5d-d634-47f9-b252-1437066f06e8-logs\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.654736 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.654810 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-579d4\" (UniqueName: \"kubernetes.io/projected/c56d3b5d-d634-47f9-b252-1437066f06e8-kube-api-access-579d4\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.654973 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-config-data\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.756982 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.757028 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56d3b5d-d634-47f9-b252-1437066f06e8-logs\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.757106 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " 
pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.757136 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579d4\" (UniqueName: \"kubernetes.io/projected/c56d3b5d-d634-47f9-b252-1437066f06e8-kube-api-access-579d4\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.757188 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-config-data\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.757722 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56d3b5d-d634-47f9-b252-1437066f06e8-logs\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.760888 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.761384 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.764828 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-config-data\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.773269 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579d4\" (UniqueName: \"kubernetes.io/projected/c56d3b5d-d634-47f9-b252-1437066f06e8-kube-api-access-579d4\") pod \"nova-metadata-0\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") " pod="openstack/nova-metadata-0" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.785969 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.786011 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.838257 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:52 crc kubenswrapper[4889]: I1128 07:10:52.870656 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:10:53 crc kubenswrapper[4889]: I1128 07:10:53.178136 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22942b26-7d2f-4a77-9d97-b7bd457dcfe7","Type":"ContainerStarted","Data":"288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743"} Nov 28 07:10:53 crc kubenswrapper[4889]: I1128 07:10:53.217102 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.217076919 podStartE2EDuration="2.217076919s" podCreationTimestamp="2025-11-28 07:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:53.206541707 +0000 UTC m=+1376.176775882" watchObservedRunningTime="2025-11-28 07:10:53.217076919 +0000 UTC m=+1376.187311074" Nov 28 07:10:53 crc kubenswrapper[4889]: I1128 07:10:53.234103 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wm65j" Nov 28 07:10:53 crc kubenswrapper[4889]: I1128 07:10:53.292754 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wm65j"] Nov 28 07:10:53 crc kubenswrapper[4889]: I1128 07:10:53.300820 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:10:53 crc kubenswrapper[4889]: I1128 07:10:53.344090 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec" path="/var/lib/kubelet/pods/854cbddb-2c79-4ec7-ad0f-7f8cb06d76ec/volumes" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.189768 4889 generic.go:334] "Generic (PLEG): container finished" podID="a3e44fe1-819e-47c5-b28e-737474eac475" containerID="a7e79cfbb8ab4fd42fccbf123ae90d0b8a7b68059f05ba2ab0d416261563ed69" exitCode=0 Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.189814 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3e44fe1-819e-47c5-b28e-737474eac475","Type":"ContainerDied","Data":"a7e79cfbb8ab4fd42fccbf123ae90d0b8a7b68059f05ba2ab0d416261563ed69"} Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.191937 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56d3b5d-d634-47f9-b252-1437066f06e8","Type":"ContainerStarted","Data":"e42c6a2fac386f68867d9c6f7a7a339fe2bac4979ffa2c5787f9e179f30a3979"} Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.194284 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56d3b5d-d634-47f9-b252-1437066f06e8","Type":"ContainerStarted","Data":"70a438098fb583b36a3abbf2efb8b6bae09e688802ef7192c39ddcd0168358cb"} Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.644378 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.693816 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-public-tls-certs\") pod \"a3e44fe1-819e-47c5-b28e-737474eac475\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.693918 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-internal-tls-certs\") pod \"a3e44fe1-819e-47c5-b28e-737474eac475\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.693972 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e44fe1-819e-47c5-b28e-737474eac475-logs\") pod \"a3e44fe1-819e-47c5-b28e-737474eac475\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.693994 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-combined-ca-bundle\") pod \"a3e44fe1-819e-47c5-b28e-737474eac475\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.694033 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p586g\" (UniqueName: \"kubernetes.io/projected/a3e44fe1-819e-47c5-b28e-737474eac475-kube-api-access-p586g\") pod \"a3e44fe1-819e-47c5-b28e-737474eac475\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.694064 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-config-data\") pod \"a3e44fe1-819e-47c5-b28e-737474eac475\" (UID: \"a3e44fe1-819e-47c5-b28e-737474eac475\") " Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.694570 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3e44fe1-819e-47c5-b28e-737474eac475-logs" (OuterVolumeSpecName: "logs") pod "a3e44fe1-819e-47c5-b28e-737474eac475" (UID: "a3e44fe1-819e-47c5-b28e-737474eac475"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.698801 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e44fe1-819e-47c5-b28e-737474eac475-kube-api-access-p586g" (OuterVolumeSpecName: "kube-api-access-p586g") pod "a3e44fe1-819e-47c5-b28e-737474eac475" (UID: "a3e44fe1-819e-47c5-b28e-737474eac475"). InnerVolumeSpecName "kube-api-access-p586g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.721513 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-config-data" (OuterVolumeSpecName: "config-data") pod "a3e44fe1-819e-47c5-b28e-737474eac475" (UID: "a3e44fe1-819e-47c5-b28e-737474eac475"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.734542 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3e44fe1-819e-47c5-b28e-737474eac475" (UID: "a3e44fe1-819e-47c5-b28e-737474eac475"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.742825 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a3e44fe1-819e-47c5-b28e-737474eac475" (UID: "a3e44fe1-819e-47c5-b28e-737474eac475"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.748151 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a3e44fe1-819e-47c5-b28e-737474eac475" (UID: "a3e44fe1-819e-47c5-b28e-737474eac475"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.796209 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.796241 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.796251 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e44fe1-819e-47c5-b28e-737474eac475-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.796261 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.796269 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p586g\" (UniqueName: \"kubernetes.io/projected/a3e44fe1-819e-47c5-b28e-737474eac475-kube-api-access-p586g\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:54 crc kubenswrapper[4889]: I1128 07:10:54.796280 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e44fe1-819e-47c5-b28e-737474eac475-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.220946 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56d3b5d-d634-47f9-b252-1437066f06e8","Type":"ContainerStarted","Data":"567f961e244cb59c92bd5c9c282ae20876453ed39721849a5bc4edf9bc1b69a8"} Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.230262 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a3e44fe1-819e-47c5-b28e-737474eac475","Type":"ContainerDied","Data":"6e6459c7ebea20344bd802249b6f12f846fff30f039848bbb75751e5efa326e9"} Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.230340 4889 scope.go:117] "RemoveContainer" containerID="a7e79cfbb8ab4fd42fccbf123ae90d0b8a7b68059f05ba2ab0d416261563ed69" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.230283 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.230402 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wm65j" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerName="registry-server" containerID="cri-o://78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79" gracePeriod=2 Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.261173 4889 scope.go:117] "RemoveContainer" containerID="0e4b899f7f214990977f6362d05ebc0bdac7bd8b89c56c2811cac2057ab5830a" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.267002 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.26698154 podStartE2EDuration="3.26698154s" podCreationTimestamp="2025-11-28 07:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:55.258479318 +0000 UTC m=+1378.228713493" watchObservedRunningTime="2025-11-28 07:10:55.26698154 +0000 UTC m=+1378.237215695" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.295613 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.309031 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.321820 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:55 crc kubenswrapper[4889]: E1128 07:10:55.322288 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-log" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.322305 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-log" Nov 28 07:10:55 crc kubenswrapper[4889]: E1128 07:10:55.322312 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-api" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.322320 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-api" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.322495 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-log" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.322522 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" containerName="nova-api-api" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.323628 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.326351 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.326466 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.326542 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.355156 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e44fe1-819e-47c5-b28e-737474eac475" path="/var/lib/kubelet/pods/a3e44fe1-819e-47c5-b28e-737474eac475/volumes" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.355731 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.418480 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660e4f27-4ee4-43d9-b155-7132c78e9a21-logs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.418690 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49j9h\" (UniqueName: \"kubernetes.io/projected/660e4f27-4ee4-43d9-b155-7132c78e9a21-kube-api-access-49j9h\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.418915 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-public-tls-certs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.419062 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-config-data\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.419134 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.419288 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-internal-tls-certs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0" Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.520812 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-config-data\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0" Nov 28 07:10:55 crc 
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.520917 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-internal-tls-certs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.520961 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660e4f27-4ee4-43d9-b155-7132c78e9a21-logs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.521654 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660e4f27-4ee4-43d9-b155-7132c78e9a21-logs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.521853 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49j9h\" (UniqueName: \"kubernetes.io/projected/660e4f27-4ee4-43d9-b155-7132c78e9a21-kube-api-access-49j9h\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.521909 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-public-tls-certs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.527576 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-internal-tls-certs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.528429 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-config-data\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.529007 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.529452 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-public-tls-certs\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.540883 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49j9h\" (UniqueName: \"kubernetes.io/projected/660e4f27-4ee4-43d9-b155-7132c78e9a21-kube-api-access-49j9h\") pod \"nova-api-0\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") " pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.663155 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.780919 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wm65j"
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.826067 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzmrq\" (UniqueName: \"kubernetes.io/projected/ae6139fc-fc78-4423-8a8d-d526220b6d2a-kube-api-access-nzmrq\") pod \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") "
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.826224 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-utilities\") pod \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") "
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.826367 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-catalog-content\") pod \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\" (UID: \"ae6139fc-fc78-4423-8a8d-d526220b6d2a\") "
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.830069 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-utilities" (OuterVolumeSpecName: "utilities") pod "ae6139fc-fc78-4423-8a8d-d526220b6d2a" (UID: "ae6139fc-fc78-4423-8a8d-d526220b6d2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.834067 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6139fc-fc78-4423-8a8d-d526220b6d2a-kube-api-access-nzmrq" (OuterVolumeSpecName: "kube-api-access-nzmrq") pod "ae6139fc-fc78-4423-8a8d-d526220b6d2a" (UID: "ae6139fc-fc78-4423-8a8d-d526220b6d2a"). InnerVolumeSpecName "kube-api-access-nzmrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.929101 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzmrq\" (UniqueName: \"kubernetes.io/projected/ae6139fc-fc78-4423-8a8d-d526220b6d2a-kube-api-access-nzmrq\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.929145 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:55 crc kubenswrapper[4889]: I1128 07:10:55.932216 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae6139fc-fc78-4423-8a8d-d526220b6d2a" (UID: "ae6139fc-fc78-4423-8a8d-d526220b6d2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.031027 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6139fc-fc78-4423-8a8d-d526220b6d2a-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.109519 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 28 07:10:56 crc kubenswrapper[4889]: W1128 07:10:56.113281 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod660e4f27_4ee4_43d9_b155_7132c78e9a21.slice/crio-fbe58493cac7473b311c7cdb030d60f9f192f868912c8e5fe7f69056cf48079c WatchSource:0}: Error finding container fbe58493cac7473b311c7cdb030d60f9f192f868912c8e5fe7f69056cf48079c: Status 404 returned error can't find the container with id fbe58493cac7473b311c7cdb030d60f9f192f868912c8e5fe7f69056cf48079c
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.239955 4889 generic.go:334] "Generic (PLEG): container finished" podID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerID="78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79" exitCode=0
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.240023 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wm65j"
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.240037 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wm65j" event={"ID":"ae6139fc-fc78-4423-8a8d-d526220b6d2a","Type":"ContainerDied","Data":"78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79"}
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.240104 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wm65j" event={"ID":"ae6139fc-fc78-4423-8a8d-d526220b6d2a","Type":"ContainerDied","Data":"6aa0df901c4f4b231cdf9650d0254e4b18bb6a80bb221ff585fa59269eec4b4f"}
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.240130 4889 scope.go:117] "RemoveContainer" containerID="78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79"
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.242449 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660e4f27-4ee4-43d9-b155-7132c78e9a21","Type":"ContainerStarted","Data":"fbe58493cac7473b311c7cdb030d60f9f192f868912c8e5fe7f69056cf48079c"}
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.260832 4889 scope.go:117] "RemoveContainer" containerID="ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3"
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.278188 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wm65j"]
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.286856 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wm65j"]
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.294521 4889 scope.go:117] "RemoveContainer" containerID="512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8"
Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.317136 4889 scope.go:117] "RemoveContainer" containerID="78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79"
Nov 28 07:10:56 crc kubenswrapper[4889]: E1128 07:10:56.317642 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79\": container with ID starting with 78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79 not found: ID does not exist" containerID="78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79"
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79\": container with ID starting with 78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79 not found: ID does not exist" containerID="78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79" Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.317856 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79"} err="failed to get container status \"78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79\": rpc error: code = NotFound desc = could not find container \"78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79\": container with ID starting with 78d6518693e2c3d363fc98de351d703ab8e0016b05acd7b4ccca1d1c396d3c79 not found: ID does not exist" Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.317962 4889 scope.go:117] "RemoveContainer" containerID="ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3" Nov 28 07:10:56 crc kubenswrapper[4889]: E1128 07:10:56.318537 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3\": container with ID starting with ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3 not found: ID does not exist" containerID="ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3" Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.318578 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3"} err="failed to get container status \"ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3\": rpc error: code = NotFound desc = could not find container \"ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3\": container with ID starting with ff96e310288a497cfd8078b9492d019596159530542e1ccc1b7caf4e6881cdb3 not found: ID does not exist" Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.318608 4889 scope.go:117] "RemoveContainer" containerID="512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8" Nov 28 07:10:56 crc kubenswrapper[4889]: E1128 07:10:56.320931 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8\": container with ID starting with 512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8 not found: ID does not exist" containerID="512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8" Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.320976 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8"} err="failed to get container status \"512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8\": rpc error: code = NotFound desc = could not find container \"512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8\": container with ID starting with 512407d352982ab6b4c04fc643e577181e1da2e2b578e71fc252d9f7e55010b8 not found: ID does not exist" Nov 28 07:10:56 crc kubenswrapper[4889]: I1128 07:10:56.525663 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-scheduler-0" Nov 28 07:10:57 crc kubenswrapper[4889]: I1128 07:10:57.254921 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660e4f27-4ee4-43d9-b155-7132c78e9a21","Type":"ContainerStarted","Data":"acb5766d5a413069d801db205a476d18781a4f594a9ce2359a0ea46664f3fc6f"} Nov 28 07:10:57 crc kubenswrapper[4889]: I1128 07:10:57.255236 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660e4f27-4ee4-43d9-b155-7132c78e9a21","Type":"ContainerStarted","Data":"aeb659f950bddd00ce66f14e3cdebf2bdbc3d4975bbd35d09c2685e724f6146c"} Nov 28 07:10:57 crc kubenswrapper[4889]: I1128 07:10:57.284549 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.284529634 podStartE2EDuration="2.284529634s" podCreationTimestamp="2025-11-28 07:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:57.275727145 +0000 UTC m=+1380.245961300" watchObservedRunningTime="2025-11-28 07:10:57.284529634 +0000 UTC m=+1380.254763799" Nov 28 07:10:57 crc kubenswrapper[4889]: I1128 07:10:57.344688 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" path="/var/lib/kubelet/pods/ae6139fc-fc78-4423-8a8d-d526220b6d2a/volumes" Nov 28 07:10:57 crc kubenswrapper[4889]: I1128 07:10:57.871293 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:10:57 crc kubenswrapper[4889]: I1128 07:10:57.871373 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:11:01 crc kubenswrapper[4889]: I1128 07:11:01.525977 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 07:11:01 crc kubenswrapper[4889]: I1128 07:11:01.549920 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 07:11:02 crc kubenswrapper[4889]: I1128 07:11:02.323616 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 07:11:02 crc kubenswrapper[4889]: I1128 07:11:02.871410 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 07:11:02 crc kubenswrapper[4889]: I1128 07:11:02.871508 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 07:11:03 crc kubenswrapper[4889]: I1128 07:11:03.886831 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:11:03 crc kubenswrapper[4889]: I1128 07:11:03.886849 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:11:05 crc kubenswrapper[4889]: I1128 07:11:05.663652 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:11:05 
Nov 28 07:11:06 crc kubenswrapper[4889]: I1128 07:11:06.384991 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 28 07:11:06 crc kubenswrapper[4889]: I1128 07:11:06.673958 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 28 07:11:06 crc kubenswrapper[4889]: I1128 07:11:06.674017 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 07:11:12 crc kubenswrapper[4889]: I1128 07:11:12.875984 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 28 07:11:12 crc kubenswrapper[4889]: I1128 07:11:12.879022 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 28 07:11:12 crc kubenswrapper[4889]: I1128 07:11:12.880305 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 28 07:11:13 crc kubenswrapper[4889]: I1128 07:11:13.412578 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 28 07:11:15 crc kubenswrapper[4889]: I1128 07:11:15.671379 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 28 07:11:15 crc kubenswrapper[4889]: I1128 07:11:15.672106 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 28 07:11:15 crc kubenswrapper[4889]: I1128 07:11:15.672559 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 28 07:11:15 crc kubenswrapper[4889]: I1128 07:11:15.672643 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 28 07:11:15 crc kubenswrapper[4889]: I1128 07:11:15.681349 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 28 07:11:15 crc kubenswrapper[4889]: I1128 07:11:15.683813 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 28 07:11:28 crc kubenswrapper[4889]: I1128 07:11:28.783485 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:11:28 crc kubenswrapper[4889]: I1128 07:11:28.784219 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.298509 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.299274 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="9a763079-28f4-4dd4-8ad8-96bc23a29fb8" containerName="openstackclient" containerID="cri-o://3c5f80a48b3ca25b6b7f1a49e74a9bac715cb48a9bb541e28b85cc92405a168e" gracePeriod=2
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.317789 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.426927 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder6a1b-account-delete-bdn66"]
Nov 28 07:11:34 crc kubenswrapper[4889]: E1128 07:11:34.427358 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a763079-28f4-4dd4-8ad8-96bc23a29fb8" containerName="openstackclient"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.427374 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a763079-28f4-4dd4-8ad8-96bc23a29fb8" containerName="openstackclient"
Nov 28 07:11:34 crc kubenswrapper[4889]: E1128 07:11:34.427385 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerName="extract-content"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.427392 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerName="extract-content"
Nov 28 07:11:34 crc kubenswrapper[4889]: E1128 07:11:34.427422 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerName="registry-server"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.427430 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerName="registry-server"
Nov 28 07:11:34 crc kubenswrapper[4889]: E1128 07:11:34.427443 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerName="extract-utilities"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.427449 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerName="extract-utilities"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.427618 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a763079-28f4-4dd4-8ad8-96bc23a29fb8" containerName="openstackclient"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.427648 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6139fc-fc78-4423-8a8d-d526220b6d2a" containerName="registry-server"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.428303 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder6a1b-account-delete-bdn66"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.456176 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.465266 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder6a1b-account-delete-bdn66"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.533751 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg5gm\" (UniqueName: \"kubernetes.io/projected/f07c52ed-8e06-4dc1-8400-09a9dba35926-kube-api-access-tg5gm\") pod \"cinder6a1b-account-delete-bdn66\" (UID: \"f07c52ed-8e06-4dc1-8400-09a9dba35926\") " pod="openstack/cinder6a1b-account-delete-bdn66"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.533816 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f07c52ed-8e06-4dc1-8400-09a9dba35926-operator-scripts\") pod \"cinder6a1b-account-delete-bdn66\" (UID: \"f07c52ed-8e06-4dc1-8400-09a9dba35926\") " pod="openstack/cinder6a1b-account-delete-bdn66"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.611095 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.611718 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" containerName="openstack-network-exporter" containerID="cri-o://5eb83b765e57ee122fbe625e86ad95bb06d3206e2ca82bde40a2997e84a961fb" gracePeriod=300
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.636040 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg5gm\" (UniqueName: \"kubernetes.io/projected/f07c52ed-8e06-4dc1-8400-09a9dba35926-kube-api-access-tg5gm\") pod \"cinder6a1b-account-delete-bdn66\" (UID: \"f07c52ed-8e06-4dc1-8400-09a9dba35926\") " pod="openstack/cinder6a1b-account-delete-bdn66"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.636109 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f07c52ed-8e06-4dc1-8400-09a9dba35926-operator-scripts\") pod \"cinder6a1b-account-delete-bdn66\" (UID: \"f07c52ed-8e06-4dc1-8400-09a9dba35926\") " pod="openstack/cinder6a1b-account-delete-bdn66"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.638439 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f07c52ed-8e06-4dc1-8400-09a9dba35926-operator-scripts\") pod \"cinder6a1b-account-delete-bdn66\" (UID: \"f07c52ed-8e06-4dc1-8400-09a9dba35926\") " pod="openstack/cinder6a1b-account-delete-bdn66"
Nov 28 07:11:34 crc kubenswrapper[4889]: E1128 07:11:34.638499 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Nov 28 07:11:34 crc kubenswrapper[4889]: E1128 07:11:34.638537 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data podName:9b744978-786e-4ab0-8a5c-1e8e3f9a2809 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:35.13852344 +0000 UTC m=+1418.108757595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data") pod "rabbitmq-cell1-server-0" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809") : configmap "rabbitmq-cell1-config-data" not found
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.654089 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement75e4-account-delete-x6dpp"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.670483 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement75e4-account-delete-x6dpp"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.710811 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg5gm\" (UniqueName: \"kubernetes.io/projected/f07c52ed-8e06-4dc1-8400-09a9dba35926-kube-api-access-tg5gm\") pod \"cinder6a1b-account-delete-bdn66\" (UID: \"f07c52ed-8e06-4dc1-8400-09a9dba35926\") " pod="openstack/cinder6a1b-account-delete-bdn66"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.735780 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement75e4-account-delete-x6dpp"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.752434 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-q5vz8"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.781861 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder6a1b-account-delete-bdn66"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.803570 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-q5vz8"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.841908 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5zc\" (UniqueName: \"kubernetes.io/projected/8da3d6a4-5874-4305-b358-9765720b68f9-kube-api-access-pg5zc\") pod \"placement75e4-account-delete-x6dpp\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " pod="openstack/placement75e4-account-delete-x6dpp"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.841988 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da3d6a4-5874-4305-b358-9765720b68f9-operator-scripts\") pod \"placement75e4-account-delete-x6dpp\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " pod="openstack/placement75e4-account-delete-x6dpp"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.891838 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-6spsr"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.925767 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-6spsr"]
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.945628 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5zc\" (UniqueName: \"kubernetes.io/projected/8da3d6a4-5874-4305-b358-9765720b68f9-kube-api-access-pg5zc\") pod \"placement75e4-account-delete-x6dpp\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " pod="openstack/placement75e4-account-delete-x6dpp"
Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.945719 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da3d6a4-5874-4305-b358-9765720b68f9-operator-scripts\") pod \"placement75e4-account-delete-x6dpp\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " pod="openstack/placement75e4-account-delete-x6dpp"
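Each account-delete pod above gets exactly two volumes: an operator-scripts ConfigMap and an auto-injected kube-api-access-* projected volume. The projected volume bundles a bound service-account token, the cluster CA, and the pod's namespace; a hedged sketch of its shape in Go (field values here are the common defaults, stated as assumptions):

```go
package main

import corev1 "k8s.io/api/core/v1"

// Shape of an auto-injected "kube-api-access-*" projected volume, as mounted
// by the account-delete pods above. Values are typical defaults (assumptions).
var expiry int64 = 3607

var kubeAPIAccess = corev1.Volume{
	Name: "kube-api-access-tg5gm",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{
				{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
					Path: "token", ExpirationSeconds: &expiry, // bound SA token
				}},
				{ConfigMap: &corev1.ConfigMapProjection{ // cluster CA bundle
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
					Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
				}},
				{DownwardAPI: &corev1.DownwardAPIProjection{ // pod namespace
					Items: []corev1.DownwardAPIVolumeFile{{
						Path:     "namespace",
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
					}},
				}},
			},
		},
	},
}

func main() { _ = kubeAPIAccess }
```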
pod \"placement75e4-account-delete-x6dpp\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " pod="openstack/placement75e4-account-delete-x6dpp" Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.946824 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da3d6a4-5874-4305-b358-9765720b68f9-operator-scripts\") pod \"placement75e4-account-delete-x6dpp\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " pod="openstack/placement75e4-account-delete-x6dpp" Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.949269 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.949612 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="ovn-northd" containerID="cri-o://a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" gracePeriod=30 Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.950186 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="openstack-network-exporter" containerID="cri-o://501a4b31916c81c75b98f9162dc9d571bda2ac1eeda86e0c705c757893b500ab" gracePeriod=30 Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.983765 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5zc\" (UniqueName: \"kubernetes.io/projected/8da3d6a4-5874-4305-b358-9765720b68f9-kube-api-access-pg5zc\") pod \"placement75e4-account-delete-x6dpp\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " pod="openstack/placement75e4-account-delete-x6dpp" Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.984148 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican9d4e-account-delete-w2cq4"] Nov 28 07:11:34 crc kubenswrapper[4889]: I1128 07:11:34.985446 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.001611 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican9d4e-account-delete-w2cq4"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.043647 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" containerName="ovsdbserver-sb" containerID="cri-o://9c0bdc3b1d5da3bad6cec36b156ddd5f2770493a35c89a25fb3741002c171edc" gracePeriod=300 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.055648 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement75e4-account-delete-x6dpp" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.099898 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glancef2a1-account-delete-wwplw"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.110930 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glancef2a1-account-delete-wwplw" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.149825 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnswm\" (UniqueName: \"kubernetes.io/projected/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-kube-api-access-fnswm\") pod \"barbican9d4e-account-delete-w2cq4\" (UID: \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\") " pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.149886 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-operator-scripts\") pod \"barbican9d4e-account-delete-w2cq4\" (UID: \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\") " pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:35 crc kubenswrapper[4889]: E1128 07:11:35.150380 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:11:35 crc kubenswrapper[4889]: E1128 07:11:35.150441 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data podName:9b744978-786e-4ab0-8a5c-1e8e3f9a2809 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:36.150426631 +0000 UTC m=+1419.120660786 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data") pod "rabbitmq-cell1-server-0" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.240555 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glancef2a1-account-delete-wwplw"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.257367 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnswm\" (UniqueName: \"kubernetes.io/projected/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-kube-api-access-fnswm\") pod \"barbican9d4e-account-delete-w2cq4\" (UID: \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\") " pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.257440 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-operator-scripts\") pod \"barbican9d4e-account-delete-w2cq4\" (UID: \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\") " pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.259099 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d7e485-1911-4206-bf42-9a57a855a880-operator-scripts\") pod \"glancef2a1-account-delete-wwplw\" (UID: \"32d7e485-1911-4206-bf42-9a57a855a880\") " pod="openstack/glancef2a1-account-delete-wwplw" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.259275 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt9cj\" (UniqueName: \"kubernetes.io/projected/32d7e485-1911-4206-bf42-9a57a855a880-kube-api-access-lt9cj\") pod \"glancef2a1-account-delete-wwplw\" (UID: \"32d7e485-1911-4206-bf42-9a57a855a880\") " 
pod="openstack/glancef2a1-account-delete-wwplw" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.267217 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-operator-scripts\") pod \"barbican9d4e-account-delete-w2cq4\" (UID: \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\") " pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.289330 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron08e6-account-delete-rzzxh"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.316245 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnswm\" (UniqueName: \"kubernetes.io/projected/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-kube-api-access-fnswm\") pod \"barbican9d4e-account-delete-w2cq4\" (UID: \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\") " pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.328771 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.373242 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d7e485-1911-4206-bf42-9a57a855a880-operator-scripts\") pod \"glancef2a1-account-delete-wwplw\" (UID: \"32d7e485-1911-4206-bf42-9a57a855a880\") " pod="openstack/glancef2a1-account-delete-wwplw" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.373493 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt9cj\" (UniqueName: \"kubernetes.io/projected/32d7e485-1911-4206-bf42-9a57a855a880-kube-api-access-lt9cj\") pod \"glancef2a1-account-delete-wwplw\" (UID: \"32d7e485-1911-4206-bf42-9a57a855a880\") " pod="openstack/glancef2a1-account-delete-wwplw" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.374195 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d7e485-1911-4206-bf42-9a57a855a880-operator-scripts\") pod \"glancef2a1-account-delete-wwplw\" (UID: \"32d7e485-1911-4206-bf42-9a57a855a880\") " pod="openstack/glancef2a1-account-delete-wwplw" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.423310 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.438433 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt9cj\" (UniqueName: \"kubernetes.io/projected/32d7e485-1911-4206-bf42-9a57a855a880-kube-api-access-lt9cj\") pod \"glancef2a1-account-delete-wwplw\" (UID: \"32d7e485-1911-4206-bf42-9a57a855a880\") " pod="openstack/glancef2a1-account-delete-wwplw" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.476211 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b4p6\" (UniqueName: \"kubernetes.io/projected/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-kube-api-access-8b4p6\") pod \"neutron08e6-account-delete-rzzxh\" (UID: \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\") " pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.476297 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-operator-scripts\") pod \"neutron08e6-account-delete-rzzxh\" (UID: \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\") " pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.477123 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a51e5e-b005-4d01-b0a3-86f27d671c32" path="/var/lib/kubelet/pods/76a51e5e-b005-4d01-b0a3-86f27d671c32/volumes" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.481846 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851c4202-ebf1-44df-97d1-4c9b9bfd1fba" path="/var/lib/kubelet/pods/851c4202-ebf1-44df-97d1-4c9b9bfd1fba/volumes" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.482599 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron08e6-account-delete-rzzxh"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.482630 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-xg58q"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.487617 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-d2mhk"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.487910 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-xg58q" podUID="fd5deb3d-df4a-48e4-844b-35247485825a" containerName="openstack-network-exporter" containerID="cri-o://3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662" gracePeriod=30 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.527127 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glancef2a1-account-delete-wwplw" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.535598 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.581639 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b4p6\" (UniqueName: \"kubernetes.io/projected/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-kube-api-access-8b4p6\") pod \"neutron08e6-account-delete-rzzxh\" (UID: \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\") " pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.581893 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-operator-scripts\") pod \"neutron08e6-account-delete-rzzxh\" (UID: \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\") " pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:35 crc kubenswrapper[4889]: E1128 07:11:35.582895 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:11:35 crc kubenswrapper[4889]: E1128 07:11:35.582934 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data podName:90d501b3-ad2c-4fb8-814d-411dc2a11f20 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:36.082920603 +0000 UTC m=+1419.053154748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data") pod "rabbitmq-server-0" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20") : configmap "rabbitmq-config-data" not found Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.593733 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-operator-scripts\") pod \"neutron08e6-account-delete-rzzxh\" (UID: \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\") " pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.600159 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dlfmr"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.636256 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b4p6\" (UniqueName: \"kubernetes.io/projected/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-kube-api-access-8b4p6\") pod \"neutron08e6-account-delete-rzzxh\" (UID: \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\") " pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.636347 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell16403-account-delete-l5wmp"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.638912 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.655683 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell16403-account-delete-l5wmp"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.667999 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-578vb"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.676790 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.677326 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-578vb"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.686795 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts\") pod \"novacell16403-account-delete-l5wmp\" (UID: \"afb1ca65-412b-4179-ac61-4904d9f6e001\") " pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.686857 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjnj\" (UniqueName: \"kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj\") pod \"novacell16403-account-delete-l5wmp\" (UID: \"afb1ca65-412b-4179-ac61-4904d9f6e001\") " pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.693084 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8cb4f"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.718552 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8cb4f"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.799458 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi670d-account-delete-q5q9k"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.800671 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.802414 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts\") pod \"novacell16403-account-delete-l5wmp\" (UID: \"afb1ca65-412b-4179-ac61-4904d9f6e001\") " pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.803817 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjnj\" (UniqueName: \"kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj\") pod \"novacell16403-account-delete-l5wmp\" (UID: \"afb1ca65-412b-4179-ac61-4904d9f6e001\") " pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.809898 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts\") pod \"novacell16403-account-delete-l5wmp\" (UID: \"afb1ca65-412b-4179-ac61-4904d9f6e001\") " pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:35 crc kubenswrapper[4889]: E1128 07:11:35.823092 4889 projected.go:194] Error preparing data for projected volume kube-api-access-hqjnj for pod openstack/novacell16403-account-delete-l5wmp: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Nov 28 07:11:35 crc kubenswrapper[4889]: E1128 07:11:35.823167 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj podName:afb1ca65-412b-4179-ac61-4904d9f6e001 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:36.323146071 +0000 UTC m=+1419.293380226 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqjnj" (UniqueName: "kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj") pod "novacell16403-account-delete-l5wmp" (UID: "afb1ca65-412b-4179-ac61-4904d9f6e001") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.825975 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c960973-a307-4a8a-9fe6-885450c512e0/ovsdbserver-sb/0.log" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.826042 4889 generic.go:334] "Generic (PLEG): container finished" podID="7c960973-a307-4a8a-9fe6-885450c512e0" containerID="5eb83b765e57ee122fbe625e86ad95bb06d3206e2ca82bde40a2997e84a961fb" exitCode=2 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.826062 4889 generic.go:334] "Generic (PLEG): container finished" podID="7c960973-a307-4a8a-9fe6-885450c512e0" containerID="9c0bdc3b1d5da3bad6cec36b156ddd5f2770493a35c89a25fb3741002c171edc" exitCode=143 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.826159 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c960973-a307-4a8a-9fe6-885450c512e0","Type":"ContainerDied","Data":"5eb83b765e57ee122fbe625e86ad95bb06d3206e2ca82bde40a2997e84a961fb"} Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.826184 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c960973-a307-4a8a-9fe6-885450c512e0","Type":"ContainerDied","Data":"9c0bdc3b1d5da3bad6cec36b156ddd5f2770493a35c89a25fb3741002c171edc"} Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.856498 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi670d-account-delete-q5q9k"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.876739 4889 generic.go:334] "Generic (PLEG): container finished" podID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerID="501a4b31916c81c75b98f9162dc9d571bda2ac1eeda86e0c705c757893b500ab" exitCode=2 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.876787 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"972b231d-adb2-4355-ae5b-57fc0cc642f4","Type":"ContainerDied","Data":"501a4b31916c81c75b98f9162dc9d571bda2ac1eeda86e0c705c757893b500ab"} Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.888741 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.888988 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerName="cinder-scheduler" containerID="cri-o://49e67bb5951ea35d2e035af45fe412854503885e3be636b75ae99068b967486a" gracePeriod=30 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.889387 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerName="probe" containerID="cri-o://f75c2d3e942c4126e39bbb3c030aefb4924d2c9473c25ea74d8b3c3218308e58" gracePeriod=30 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.913618 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9m99\" (UniqueName: \"kubernetes.io/projected/00c7d31d-27e7-45cc-abb6-bae21de9135f-kube-api-access-h9m99\") pod 
\"novaapi670d-account-delete-q5q9k\" (UID: \"00c7d31d-27e7-45cc-abb6-bae21de9135f\") " pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.913744 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00c7d31d-27e7-45cc-abb6-bae21de9135f-operator-scripts\") pod \"novaapi670d-account-delete-q5q9k\" (UID: \"00c7d31d-27e7-45cc-abb6-bae21de9135f\") " pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:35 crc kubenswrapper[4889]: E1128 07:11:35.916463 4889 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Nov 28 07:11:35 crc kubenswrapper[4889]: E1128 07:11:35.916524 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts podName:afb1ca65-412b-4179-ac61-4904d9f6e001 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:36.416506879 +0000 UTC m=+1419.386741034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts") pod "novacell16403-account-delete-l5wmp" (UID: "afb1ca65-412b-4179-ac61-4904d9f6e001") : configmap "openstack-cell1-scripts" not found Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.916770 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8fc4ccc9-wc58j"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.917042 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" podUID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" containerName="dnsmasq-dns" containerID="cri-o://2365c7b49a5eac186167c60fde0c3ed33a799576881ae61606acd63b56a773ae" gracePeriod=10 Nov 28 07:11:35 crc kubenswrapper[4889]: W1128 07:11:35.924412 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf07c52ed_8e06_4dc1_8400_09a9dba35926.slice/crio-ccef28d07ac74be76db68682d8aa2b359ad81156f82d5f979cd6e395ef588cae WatchSource:0}: Error finding container ccef28d07ac74be76db68682d8aa2b359ad81156f82d5f979cd6e395ef588cae: Status 404 returned error can't find the container with id ccef28d07ac74be76db68682d8aa2b359ad81156f82d5f979cd6e395ef588cae Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.938014 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5bbc5ddd4-vzclt"] Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.938546 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5bbc5ddd4-vzclt" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerName="placement-log" containerID="cri-o://ff5c205f4bf58cd1d0ad31c563376d2141a5b307862a95d33a353065a03c5642" gracePeriod=30 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.939059 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5bbc5ddd4-vzclt" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerName="placement-api" containerID="cri-o://916841af475c0d0409c239e605ccdb71c123e2852a495b97c814602f89fea785" gracePeriod=30 Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.953725 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell03d10-account-delete-vhnfs"] Nov 28 07:11:35 crc 
kubenswrapper[4889]: I1128 07:11:35.956094 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:35 crc kubenswrapper[4889]: I1128 07:11:35.978739 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell03d10-account-delete-vhnfs"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.020380 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9m99\" (UniqueName: \"kubernetes.io/projected/00c7d31d-27e7-45cc-abb6-bae21de9135f-kube-api-access-h9m99\") pod \"novaapi670d-account-delete-q5q9k\" (UID: \"00c7d31d-27e7-45cc-abb6-bae21de9135f\") " pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.020456 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00c7d31d-27e7-45cc-abb6-bae21de9135f-operator-scripts\") pod \"novaapi670d-account-delete-q5q9k\" (UID: \"00c7d31d-27e7-45cc-abb6-bae21de9135f\") " pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.020540 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-operator-scripts\") pod \"novacell03d10-account-delete-vhnfs\" (UID: \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\") " pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.020567 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76kg\" (UniqueName: \"kubernetes.io/projected/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-kube-api-access-w76kg\") pod \"novacell03d10-account-delete-vhnfs\" (UID: \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\") " pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.031366 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.031579 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api-log" containerID="cri-o://5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.032681 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api" containerID="cri-o://0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.033568 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00c7d31d-27e7-45cc-abb6-bae21de9135f-operator-scripts\") pod \"novaapi670d-account-delete-q5q9k\" (UID: \"00c7d31d-27e7-45cc-abb6-bae21de9135f\") " pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.065954 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9m99\" (UniqueName: \"kubernetes.io/projected/00c7d31d-27e7-45cc-abb6-bae21de9135f-kube-api-access-h9m99\") pod \"novaapi670d-account-delete-q5q9k\" (UID: 
\"00c7d31d-27e7-45cc-abb6-bae21de9135f\") " pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.067011 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.067698 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="a92a932b-ef66-408c-883e-99412a94d0da" containerName="openstack-network-exporter" containerID="cri-o://e38cd97bce0fc8d698d4e44b7375fde620f8a3ee986dc3c97e437a42647d9d7f" gracePeriod=300 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.133136 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-66vh9"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.134611 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-operator-scripts\") pod \"novacell03d10-account-delete-vhnfs\" (UID: \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\") " pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.134674 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w76kg\" (UniqueName: \"kubernetes.io/projected/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-kube-api-access-w76kg\") pod \"novacell03d10-account-delete-vhnfs\" (UID: \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\") " pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.135151 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.135209 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data podName:90d501b3-ad2c-4fb8-814d-411dc2a11f20 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:37.13519209 +0000 UTC m=+1420.105426255 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data") pod "rabbitmq-server-0" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20") : configmap "rabbitmq-config-data" not found Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.136355 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-operator-scripts\") pod \"novacell03d10-account-delete-vhnfs\" (UID: \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\") " pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.158129 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="a92a932b-ef66-408c-883e-99412a94d0da" containerName="ovsdbserver-nb" containerID="cri-o://8b167955e43f6529720269cc5280735e5f9b8f62a031ef0fc8db7679214765f7" gracePeriod=300 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.162477 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w76kg\" (UniqueName: \"kubernetes.io/projected/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-kube-api-access-w76kg\") pod \"novacell03d10-account-delete-vhnfs\" (UID: \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\") " pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.175348 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.176089 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-66vh9"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.226579 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-r99w7"] Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.236124 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.236213 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data podName:9b744978-786e-4ab0-8a5c-1e8e3f9a2809 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:38.236198018 +0000 UTC m=+1421.206432163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data") pod "rabbitmq-cell1-server-0" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.260983 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-r99w7"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.279174 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.331699 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pnfxg"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.339359 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjnj\" (UniqueName: \"kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj\") pod \"novacell16403-account-delete-l5wmp\" (UID: \"afb1ca65-412b-4179-ac61-4904d9f6e001\") " pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.340486 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pnfxg"] Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.358050 4889 projected.go:194] Error preparing data for projected volume kube-api-access-hqjnj for pod openstack/novacell16403-account-delete-l5wmp: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.358120 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj podName:afb1ca65-412b-4179-ac61-4904d9f6e001 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:37.358101397 +0000 UTC m=+1420.328335542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqjnj" (UniqueName: "kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj") pod "novacell16403-account-delete-l5wmp" (UID: "afb1ca65-412b-4179-ac61-4904d9f6e001") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.372561 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder6a1b-account-delete-bdn66"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.403094 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.403594 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-server" containerID="cri-o://5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404463 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="swift-recon-cron" containerID="cri-o://cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404535 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="rsync" containerID="cri-o://54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404583 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-expirer" containerID="cri-o://107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd" gracePeriod=30 Nov 28 07:11:36 crc 
kubenswrapper[4889]: I1128 07:11:36.404661 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-updater" containerID="cri-o://232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404748 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-auditor" containerID="cri-o://b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404801 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-replicator" containerID="cri-o://f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404842 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-server" containerID="cri-o://4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404898 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-updater" containerID="cri-o://d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404940 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-auditor" containerID="cri-o://12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.404979 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-replicator" containerID="cri-o://e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.405017 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-server" containerID="cri-o://bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.405057 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-reaper" containerID="cri-o://c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.405096 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-auditor" containerID="cri-o://2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.405185 4889 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-replicator" containerID="cri-o://3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.418062 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-j6wjv"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.438994 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-j6wjv"] Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.448640 4889 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.448701 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts podName:afb1ca65-412b-4179-ac61-4904d9f6e001 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:37.448686295 +0000 UTC m=+1420.418920450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts") pod "novacell16403-account-delete-l5wmp" (UID: "afb1ca65-412b-4179-ac61-4904d9f6e001") : configmap "openstack-cell1-scripts" not found Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.482503 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.482890 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerName="glance-log" containerID="cri-o://22318eb16b34523322d3a94ac17704c1b438f84bf7f28f3ecaa09dfd78e54966" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.484097 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerName="glance-httpd" containerID="cri-o://cff416d0a45fbb92ec6800489afd9ccbad8dbac624f5bfcda44035e9258fc559" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.525313 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xg58q_fd5deb3d-df4a-48e4-844b-35247485825a/openstack-network-exporter/0.log" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.525403 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.569772 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.570534 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerName="glance-log" containerID="cri-o://409c5ef01d2ff33efa004111267e8e87bbe31d48936823d35c2588b49a2b67eb" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.571301 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerName="glance-httpd" containerID="cri-o://49402cf027d11e8e350b29757338f93c7461291da6a3603e125d8fc9821c3652" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.577133 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c960973-a307-4a8a-9fe6-885450c512e0/ovsdbserver-sb/0.log" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.580189 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.635622 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c99d75dcc-cgtnj"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.641618 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c99d75dcc-cgtnj" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerName="neutron-api" containerID="cri-o://7ca5f31a155561a771625bbbaea4e69473efa075212d50535e793088359bdafe" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.651682 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c99d75dcc-cgtnj" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerName="neutron-httpd" containerID="cri-o://a93ee07b36d6a14a36fd7e9a347daa5368daca2560780121eb1189c052d07f4b" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.670623 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-metrics-certs-tls-certs\") pod \"fd5deb3d-df4a-48e4-844b-35247485825a\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.670694 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5deb3d-df4a-48e4-844b-35247485825a-config\") pod \"fd5deb3d-df4a-48e4-844b-35247485825a\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671296 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-combined-ca-bundle\") pod \"7c960973-a307-4a8a-9fe6-885450c512e0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671339 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-scripts\") pod 
\"7c960973-a307-4a8a-9fe6-885450c512e0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671392 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-config\") pod \"7c960973-a307-4a8a-9fe6-885450c512e0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671430 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkkcz\" (UniqueName: \"kubernetes.io/projected/fd5deb3d-df4a-48e4-844b-35247485825a-kube-api-access-kkkcz\") pod \"fd5deb3d-df4a-48e4-844b-35247485825a\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671453 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovn-rundir\") pod \"fd5deb3d-df4a-48e4-844b-35247485825a\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671479 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdbserver-sb-tls-certs\") pod \"7c960973-a307-4a8a-9fe6-885450c512e0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671501 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovs-rundir\") pod \"fd5deb3d-df4a-48e4-844b-35247485825a\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671561 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcksq\" (UniqueName: \"kubernetes.io/projected/7c960973-a307-4a8a-9fe6-885450c512e0-kube-api-access-kcksq\") pod \"7c960973-a307-4a8a-9fe6-885450c512e0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671609 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdb-rundir\") pod \"7c960973-a307-4a8a-9fe6-885450c512e0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671627 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-combined-ca-bundle\") pod \"fd5deb3d-df4a-48e4-844b-35247485825a\" (UID: \"fd5deb3d-df4a-48e4-844b-35247485825a\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671736 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-metrics-certs-tls-certs\") pod \"7c960973-a307-4a8a-9fe6-885450c512e0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.671807 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"7c960973-a307-4a8a-9fe6-885450c512e0\" (UID: \"7c960973-a307-4a8a-9fe6-885450c512e0\") " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.672395 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5deb3d-df4a-48e4-844b-35247485825a-config" (OuterVolumeSpecName: "config") pod "fd5deb3d-df4a-48e4-844b-35247485825a" (UID: "fd5deb3d-df4a-48e4-844b-35247485825a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.672482 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "fd5deb3d-df4a-48e4-844b-35247485825a" (UID: "fd5deb3d-df4a-48e4-844b-35247485825a"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.672672 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5deb3d-df4a-48e4-844b-35247485825a-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.672687 4889 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.673331 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-config" (OuterVolumeSpecName: "config") pod "7c960973-a307-4a8a-9fe6-885450c512e0" (UID: "7c960973-a307-4a8a-9fe6-885450c512e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.676415 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-scripts" (OuterVolumeSpecName: "scripts") pod "7c960973-a307-4a8a-9fe6-885450c512e0" (UID: "7c960973-a307-4a8a-9fe6-885450c512e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.676475 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "fd5deb3d-df4a-48e4-844b-35247485825a" (UID: "fd5deb3d-df4a-48e4-844b-35247485825a"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.677984 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "7c960973-a307-4a8a-9fe6-885450c512e0" (UID: "7c960973-a307-4a8a-9fe6-885450c512e0"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.678074 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.679556 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="22942b26-7d2f-4a77-9d97-b7bd457dcfe7" containerName="nova-scheduler-scheduler" containerID="cri-o://288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.681904 4889 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 28 07:11:36 crc kubenswrapper[4889]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 28 07:11:36 crc kubenswrapper[4889]: + source /usr/local/bin/container-scripts/functions Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNBridge=br-int Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNRemote=tcp:localhost:6642 Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNEncapType=geneve Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNAvailabilityZones= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ EnableChassisAsGateway=true Nov 28 07:11:36 crc kubenswrapper[4889]: ++ PhysicalNetworks= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNHostName= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 28 07:11:36 crc kubenswrapper[4889]: ++ ovs_dir=/var/lib/openvswitch Nov 28 07:11:36 crc kubenswrapper[4889]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 28 07:11:36 crc kubenswrapper[4889]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 28 07:11:36 crc kubenswrapper[4889]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 28 07:11:36 crc kubenswrapper[4889]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 28 07:11:36 crc kubenswrapper[4889]: + sleep 0.5 Nov 28 07:11:36 crc kubenswrapper[4889]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 28 07:11:36 crc kubenswrapper[4889]: + cleanup_ovsdb_server_semaphore Nov 28 07:11:36 crc kubenswrapper[4889]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 28 07:11:36 crc kubenswrapper[4889]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 28 07:11:36 crc kubenswrapper[4889]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-d2mhk" message=< Nov 28 07:11:36 crc kubenswrapper[4889]: Exiting ovsdb-server (5) [ OK ] Nov 28 07:11:36 crc kubenswrapper[4889]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 28 07:11:36 crc kubenswrapper[4889]: + source /usr/local/bin/container-scripts/functions Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNBridge=br-int Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNRemote=tcp:localhost:6642 Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNEncapType=geneve Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNAvailabilityZones= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ EnableChassisAsGateway=true Nov 28 07:11:36 crc kubenswrapper[4889]: ++ PhysicalNetworks= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNHostName= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 28 07:11:36 crc kubenswrapper[4889]: ++ ovs_dir=/var/lib/openvswitch Nov 28 07:11:36 crc kubenswrapper[4889]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 28 07:11:36 crc kubenswrapper[4889]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 28 07:11:36 crc kubenswrapper[4889]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 28 07:11:36 crc kubenswrapper[4889]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 28 07:11:36 crc kubenswrapper[4889]: + sleep 0.5 Nov 28 07:11:36 crc kubenswrapper[4889]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 28 07:11:36 crc kubenswrapper[4889]: + cleanup_ovsdb_server_semaphore Nov 28 07:11:36 crc kubenswrapper[4889]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 28 07:11:36 crc kubenswrapper[4889]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 28 07:11:36 crc kubenswrapper[4889]: > Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.681942 4889 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 28 07:11:36 crc kubenswrapper[4889]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 28 07:11:36 crc kubenswrapper[4889]: + source /usr/local/bin/container-scripts/functions Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNBridge=br-int Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNRemote=tcp:localhost:6642 Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNEncapType=geneve Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNAvailabilityZones= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ EnableChassisAsGateway=true Nov 28 07:11:36 crc kubenswrapper[4889]: ++ PhysicalNetworks= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ OVNHostName= Nov 28 07:11:36 crc kubenswrapper[4889]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 28 07:11:36 crc kubenswrapper[4889]: ++ ovs_dir=/var/lib/openvswitch Nov 28 07:11:36 crc kubenswrapper[4889]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 28 07:11:36 crc kubenswrapper[4889]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 28 07:11:36 crc kubenswrapper[4889]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 28 07:11:36 crc kubenswrapper[4889]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 28 07:11:36 crc kubenswrapper[4889]: + sleep 0.5 Nov 28 07:11:36 crc kubenswrapper[4889]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 28 07:11:36 crc kubenswrapper[4889]: + cleanup_ovsdb_server_semaphore Nov 28 07:11:36 crc kubenswrapper[4889]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 28 07:11:36 crc kubenswrapper[4889]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 28 07:11:36 crc kubenswrapper[4889]: > pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" containerID="cri-o://e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.681963 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" containerID="cri-o://e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" gracePeriod=29 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.695827 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5deb3d-df4a-48e4-844b-35247485825a-kube-api-access-kkkcz" (OuterVolumeSpecName: "kube-api-access-kkkcz") pod "fd5deb3d-df4a-48e4-844b-35247485825a" (UID: "fd5deb3d-df4a-48e4-844b-35247485825a"). InnerVolumeSpecName "kube-api-access-kkkcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.695935 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.699075 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "7c960973-a307-4a8a-9fe6-885450c512e0" (UID: "7c960973-a307-4a8a-9fe6-885450c512e0"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.713139 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.713371 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-log" containerID="cri-o://aeb659f950bddd00ce66f14e3cdebf2bdbc3d4975bbd35d09c2685e724f6146c" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.714225 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-api" containerID="cri-o://acb5766d5a413069d801db205a476d18781a4f594a9ce2359a0ea46664f3fc6f" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.723959 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c960973-a307-4a8a-9fe6-885450c512e0-kube-api-access-kcksq" (OuterVolumeSpecName: "kube-api-access-kcksq") pod "7c960973-a307-4a8a-9fe6-885450c512e0" (UID: "7c960973-a307-4a8a-9fe6-885450c512e0"). InnerVolumeSpecName "kube-api-access-kcksq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.731604 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.739104 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.739299 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-log" containerID="cri-o://e42c6a2fac386f68867d9c6f7a7a339fe2bac4979ffa2c5787f9e179f30a3979" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.739427 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-metadata" containerID="cri-o://567f961e244cb59c92bd5c9c282ae20876453ed39721849a5bc4edf9bc1b69a8" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.747478 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-548d6bf557-pbtfl"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.747797 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-548d6bf557-pbtfl" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" containerName="proxy-httpd" containerID="cri-o://6dc7556254073930e346ad003426e246a1fe721ea68cbc74809582204ec3e3ad" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.748308 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-548d6bf557-pbtfl" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" containerName="proxy-server" containerID="cri-o://def89232890ff2ea1170bf03b014fd49855e7baececf04474b47909d8032e453" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.757162 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" containerID="cri-o://de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" gracePeriod=29 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.759809 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-855dc646d8-klfjs"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.760117 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerName="barbican-keystone-listener-log" containerID="cri-o://88a22234953fdf7b7113f5a16ffb14c7f8e9a5558572a79a29816025d55b2843" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.760231 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerName="barbican-keystone-listener" containerID="cri-o://bc963ae674cb642cf73feedb96f166caf22e14565033105ea01efe00c81d6de0" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.771121 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-59dcb6998f-sb4k2"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.771327 4889 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/barbican-worker-59dcb6998f-sb4k2" podUID="741842f5-b565-43c8-bd99-eb15782fcf18" containerName="barbican-worker-log" containerID="cri-o://8961c6c6cb72aa100a7094f71ba9f1994c37f8a3f3c96b49d31139ba2ab2efea" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.771647 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-59dcb6998f-sb4k2" podUID="741842f5-b565-43c8-bd99-eb15782fcf18" containerName="barbican-worker" containerID="cri-o://ebd8b75f47303d72ac1c1453cd80c63707ba3cde640979a9031b535215433325" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.775394 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.775415 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkkcz\" (UniqueName: \"kubernetes.io/projected/fd5deb3d-df4a-48e4-844b-35247485825a-kube-api-access-kkkcz\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.775425 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fd5deb3d-df4a-48e4-844b-35247485825a-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.775435 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcksq\" (UniqueName: \"kubernetes.io/projected/7c960973-a307-4a8a-9fe6-885450c512e0-kube-api-access-kcksq\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.775443 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.775463 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.775472 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c960973-a307-4a8a-9fe6-885450c512e0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.784597 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fd84fdbd8-ztpds"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.785107 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fd84fdbd8-ztpds" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api-log" containerID="cri-o://411c51ac4022ce773c6ca107021fdf0aa7e87825c86f41edfb9eef55abeb15ae" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.785500 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fd84fdbd8-ztpds" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api" containerID="cri-o://e680db750829bfe235068d372b958d1768e839b09f9e0ae52648fe5055964fda" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.797552 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6403-account-create-update-lm2b6"] Nov 28 
07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.808809 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-tzkfc"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.824439 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-tzkfc"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.834050 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6403-account-create-update-lm2b6"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.841468 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell16403-account-delete-l5wmp"] Nov 28 07:11:36 crc kubenswrapper[4889]: E1128 07:11:36.842416 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-hqjnj], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/novacell16403-account-delete-l5wmp" podUID="afb1ca65-412b-4179-ac61-4904d9f6e001" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.849410 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.849817 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d578f2c7-2fee-4032-b63e-0dc8e5d1371f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://29d04d773589b050b9a77e90cdf11d2996f36460fa7d4f5ca93bba075ac8e4fd" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.866642 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.884627 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rkslq"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.910086 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" containerName="rabbitmq" containerID="cri-o://c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761" gracePeriod=604800 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.932917 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd5deb3d-df4a-48e4-844b-35247485825a" (UID: "fd5deb3d-df4a-48e4-844b-35247485825a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.935840 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.936102 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="bf4ff6f2-105e-4f62-be58-3054d0a54fed" containerName="nova-cell1-conductor-conductor" containerID="cri-o://cb72c62f8cc63262a8e708afde0ce2707f137cbcd34fac8af65e4f38de5d9324" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.948104 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rkslq"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.972229 4889 generic.go:334] "Generic (PLEG): container finished" podID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" exitCode=0 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.972342 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2mhk" event={"ID":"d69857d8-b0ca-49bd-9d89-3ad02ec7adea","Type":"ContainerDied","Data":"e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f"} Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.978168 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd5js"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.981130 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.982795 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.983047 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="0ca42308-451d-48e1-a74f-2c7ce6c6a53a" containerName="nova-cell0-conductor-conductor" containerID="cri-o://0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10" gracePeriod=30 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.983549 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xg58q_fd5deb3d-df4a-48e4-844b-35247485825a/openstack-network-exporter/0.log" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.983601 4889 generic.go:334] "Generic (PLEG): container finished" podID="fd5deb3d-df4a-48e4-844b-35247485825a" containerID="3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662" exitCode=2 Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.983649 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xg58q" event={"ID":"fd5deb3d-df4a-48e4-844b-35247485825a","Type":"ContainerDied","Data":"3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662"} Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.983675 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xg58q" event={"ID":"fd5deb3d-df4a-48e4-844b-35247485825a","Type":"ContainerDied","Data":"c21c7f926ac799226c679abd9245c34e020745016ac8880f28de5e36bdb149c6"} Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.983692 4889 scope.go:117] "RemoveContainer" 
containerID="3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.983828 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xg58q" Nov 28 07:11:36 crc kubenswrapper[4889]: I1128 07:11:36.990060 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd5js"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.005992 4889 generic.go:334] "Generic (PLEG): container finished" podID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerID="22318eb16b34523322d3a94ac17704c1b438f84bf7f28f3ecaa09dfd78e54966" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.006089 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae","Type":"ContainerDied","Data":"22318eb16b34523322d3a94ac17704c1b438f84bf7f28f3ecaa09dfd78e54966"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.048227 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c960973-a307-4a8a-9fe6-885450c512e0/ovsdbserver-sb/0.log" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.048336 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c960973-a307-4a8a-9fe6-885450c512e0","Type":"ContainerDied","Data":"8f30b580ebf20094bee3cbfc1ade77b8f39a86a7ae209e023090f5b26c44dd50"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.048406 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.071117 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement75e4-account-delete-x6dpp"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.077141 4889 generic.go:334] "Generic (PLEG): container finished" podID="741842f5-b565-43c8-bd99-eb15782fcf18" containerID="8961c6c6cb72aa100a7094f71ba9f1994c37f8a3f3c96b49d31139ba2ab2efea" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.077254 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dcb6998f-sb4k2" event={"ID":"741842f5-b565-43c8-bd99-eb15782fcf18","Type":"ContainerDied","Data":"8961c6c6cb72aa100a7094f71ba9f1994c37f8a3f3c96b49d31139ba2ab2efea"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.085912 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "7c960973-a307-4a8a-9fe6-885450c512e0" (UID: "7c960973-a307-4a8a-9fe6-885450c512e0"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.088182 4889 generic.go:334] "Generic (PLEG): container finished" podID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerID="409c5ef01d2ff33efa004111267e8e87bbe31d48936823d35c2588b49a2b67eb" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.088261 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ed215c-b8d0-43fb-85bd-8531e5acf609","Type":"ContainerDied","Data":"409c5ef01d2ff33efa004111267e8e87bbe31d48936823d35c2588b49a2b67eb"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.099311 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder6a1b-account-delete-bdn66" event={"ID":"f07c52ed-8e06-4dc1-8400-09a9dba35926","Type":"ContainerStarted","Data":"79ae825f91682d3ef9316a02a63bea53ce47014451ad8747ccf4c29d0f23bd3f"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.099358 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder6a1b-account-delete-bdn66" event={"ID":"f07c52ed-8e06-4dc1-8400-09a9dba35926","Type":"ContainerStarted","Data":"ccef28d07ac74be76db68682d8aa2b359ad81156f82d5f979cd6e395ef588cae"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.107057 4889 generic.go:334] "Generic (PLEG): container finished" podID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerID="f75c2d3e942c4126e39bbb3c030aefb4924d2c9473c25ea74d8b3c3218308e58" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.107158 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91eac1f-c699-4e53-9ff8-e8326bf4e185","Type":"ContainerDied","Data":"f75c2d3e942c4126e39bbb3c030aefb4924d2c9473c25ea74d8b3c3218308e58"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.111392 4889 generic.go:334] "Generic (PLEG): container finished" podID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerID="88a22234953fdf7b7113f5a16ffb14c7f8e9a5558572a79a29816025d55b2843" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.111447 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" event={"ID":"d29dfd27-459d-4ade-8119-3c84095d0b1b","Type":"ContainerDied","Data":"88a22234953fdf7b7113f5a16ffb14c7f8e9a5558572a79a29816025d55b2843"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.134919 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" containerName="rabbitmq" containerID="cri-o://1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a" gracePeriod=604800 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.138296 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder6a1b-account-delete-bdn66" podStartSLOduration=3.138284656 podStartE2EDuration="3.138284656s" podCreationTimestamp="2025-11-28 07:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:11:37.116939574 +0000 UTC m=+1420.087173729" watchObservedRunningTime="2025-11-28 07:11:37.138284656 +0000 UTC m=+1420.108518811" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160438 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" 
containerID="107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160469 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160479 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160488 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160497 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160505 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160512 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160518 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160524 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160530 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160536 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160582 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160608 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160618 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf"} Nov 28 
07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160627 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160635 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160644 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160652 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160661 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160669 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160679 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.160688 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.186265 4889 generic.go:334] "Generic (PLEG): container finished" podID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerID="aeb659f950bddd00ce66f14e3cdebf2bdbc3d4975bbd35d09c2685e724f6146c" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.186363 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660e4f27-4ee4-43d9-b155-7132c78e9a21","Type":"ContainerDied","Data":"aeb659f950bddd00ce66f14e3cdebf2bdbc3d4975bbd35d09c2685e724f6146c"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.192782 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a92a932b-ef66-408c-883e-99412a94d0da/ovsdbserver-nb/0.log" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.192831 4889 generic.go:334] "Generic (PLEG): container finished" podID="a92a932b-ef66-408c-883e-99412a94d0da" containerID="e38cd97bce0fc8d698d4e44b7375fde620f8a3ee986dc3c97e437a42647d9d7f" exitCode=2 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.192850 4889 generic.go:334] "Generic (PLEG): container finished" 
podID="a92a932b-ef66-408c-883e-99412a94d0da" containerID="8b167955e43f6529720269cc5280735e5f9b8f62a031ef0fc8db7679214765f7" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.192898 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a92a932b-ef66-408c-883e-99412a94d0da","Type":"ContainerDied","Data":"e38cd97bce0fc8d698d4e44b7375fde620f8a3ee986dc3c97e437a42647d9d7f"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.192923 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a92a932b-ef66-408c-883e-99412a94d0da","Type":"ContainerDied","Data":"8b167955e43f6529720269cc5280735e5f9b8f62a031ef0fc8db7679214765f7"} Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.198375 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.198412 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.198467 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data podName:90d501b3-ad2c-4fb8-814d-411dc2a11f20 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:39.198443096 +0000 UTC m=+1422.168677241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data") pod "rabbitmq-server-0" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20") : configmap "rabbitmq-config-data" not found Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.199282 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="ecf7fcae-8493-4333-96c4-d4692a144187" containerName="galera" containerID="cri-o://b28087a7afb2a256eeea56a89dcb8579fa36a7333356ade368829f77738b9428" gracePeriod=30 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.204592 4889 generic.go:334] "Generic (PLEG): container finished" podID="9a763079-28f4-4dd4-8ad8-96bc23a29fb8" containerID="3c5f80a48b3ca25b6b7f1a49e74a9bac715cb48a9bb541e28b85cc92405a168e" exitCode=137 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.214503 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.239626 4889 generic.go:334] "Generic (PLEG): container finished" podID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerID="5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.239723 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7209dbe-be81-47dd-9255-c2444debdaa9","Type":"ContainerDied","Data":"5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.242232 4889 generic.go:334] "Generic (PLEG): container finished" podID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerID="ff5c205f4bf58cd1d0ad31c563376d2141a5b307862a95d33a353065a03c5642" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 
07:11:37.242274 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bbc5ddd4-vzclt" event={"ID":"010c335b-59f4-4016-976b-ac71eaf5d14f","Type":"ContainerDied","Data":"ff5c205f4bf58cd1d0ad31c563376d2141a5b307862a95d33a353065a03c5642"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.245489 4889 generic.go:334] "Generic (PLEG): container finished" podID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" containerID="2365c7b49a5eac186167c60fde0c3ed33a799576881ae61606acd63b56a773ae" exitCode=0 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.245532 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" event={"ID":"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb","Type":"ContainerDied","Data":"2365c7b49a5eac186167c60fde0c3ed33a799576881ae61606acd63b56a773ae"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.252287 4889 generic.go:334] "Generic (PLEG): container finished" podID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerID="e42c6a2fac386f68867d9c6f7a7a339fe2bac4979ffa2c5787f9e179f30a3979" exitCode=143 Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.252364 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.252365 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56d3b5d-d634-47f9-b252-1437066f06e8","Type":"ContainerDied","Data":"e42c6a2fac386f68867d9c6f7a7a339fe2bac4979ffa2c5787f9e179f30a3979"} Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.277860 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c960973-a307-4a8a-9fe6-885450c512e0" (UID: "7c960973-a307-4a8a-9fe6-885450c512e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.286875 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "fd5deb3d-df4a-48e4-844b-35247485825a" (UID: "fd5deb3d-df4a-48e4-844b-35247485825a"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.302065 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.302094 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5deb3d-df4a-48e4-844b-35247485825a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.302103 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.318994 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7c960973-a307-4a8a-9fe6-885450c512e0" (UID: "7c960973-a307-4a8a-9fe6-885450c512e0"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.322939 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glancef2a1-account-delete-wwplw"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.403604 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjnj\" (UniqueName: \"kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj\") pod \"novacell16403-account-delete-l5wmp\" (UID: \"afb1ca65-412b-4179-ac61-4904d9f6e001\") " pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.403758 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c960973-a307-4a8a-9fe6-885450c512e0-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.415517 4889 projected.go:194] Error preparing data for projected volume kube-api-access-hqjnj for pod openstack/novacell16403-account-delete-l5wmp: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.415595 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj podName:afb1ca65-412b-4179-ac61-4904d9f6e001 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:39.415576688 +0000 UTC m=+1422.385810843 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqjnj" (UniqueName: "kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj") pod "novacell16403-account-delete-l5wmp" (UID: "afb1ca65-412b-4179-ac61-4904d9f6e001") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.494784 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f08826-4d6a-453d-8681-52d2446a5918" path="/var/lib/kubelet/pods/30f08826-4d6a-453d-8681-52d2446a5918/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.495679 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57fa70ad-b991-42d2-8d9c-53b9d19e6045" path="/var/lib/kubelet/pods/57fa70ad-b991-42d2-8d9c-53b9d19e6045/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.496453 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c74af7d-0271-4b1d-8c93-88d33ca6329c" path="/var/lib/kubelet/pods/5c74af7d-0271-4b1d-8c93-88d33ca6329c/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.505988 4889 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.506055 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts podName:afb1ca65-412b-4179-ac61-4904d9f6e001 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:39.506037944 +0000 UTC m=+1422.476272099 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts") pod "novacell16403-account-delete-l5wmp" (UID: "afb1ca65-412b-4179-ac61-4904d9f6e001") : configmap "openstack-cell1-scripts" not found Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.508161 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785a729e-1203-4d90-9bc1-447968cd6ffa" path="/var/lib/kubelet/pods/785a729e-1203-4d90-9bc1-447968cd6ffa/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.509076 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6cc417-c977-4f6e-8e9c-b420b524d3d5" path="/var/lib/kubelet/pods/8d6cc417-c977-4f6e-8e9c-b420b524d3d5/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.510210 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31734ad-17b6-497c-a83b-3e960ff9291c" path="/var/lib/kubelet/pods/a31734ad-17b6-497c-a83b-3e960ff9291c/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.513853 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.514953 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.516052 4889 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3ba7d9-162a-4393-804b-0713bcc88a9c" path="/var/lib/kubelet/pods/ba3ba7d9-162a-4393-804b-0713bcc88a9c/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.517479 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c93fa965-a510-4c13-b946-51150bd493e1" path="/var/lib/kubelet/pods/c93fa965-a510-4c13-b946-51150bd493e1/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.518143 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb878697-faf4-4e49-9d9c-54f02215856b" path="/var/lib/kubelet/pods/cb878697-faf4-4e49-9d9c-54f02215856b/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.518641 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.518693 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="ovn-northd" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.522951 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da834219-0eb6-44f4-9e57-81a4ef2c201c" path="/var/lib/kubelet/pods/da834219-0eb6-44f4-9e57-81a4ef2c201c/volumes" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.525830 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican9d4e-account-delete-w2cq4"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.525865 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron08e6-account-delete-rzzxh"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.602684 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell03d10-account-delete-vhnfs"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.638079 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.639848 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.639908 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi670d-account-delete-q5q9k"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.657740 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.669881 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.671937 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.692724 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-xg58q"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.694886 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a92a932b-ef66-408c-883e-99412a94d0da/ovsdbserver-nb/0.log" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.694983 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.708868 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-xg58q"] Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709552 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config-secret\") pod \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709582 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-nb\") pod \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709629 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-metrics-certs-tls-certs\") pod \"a92a932b-ef66-408c-883e-99412a94d0da\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709650 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts\") pod \"afb1ca65-412b-4179-ac61-4904d9f6e001\" (UID: \"afb1ca65-412b-4179-ac61-4904d9f6e001\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709673 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-config\") pod \"a92a932b-ef66-408c-883e-99412a94d0da\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709724 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-combined-ca-bundle\") pod \"a92a932b-ef66-408c-883e-99412a94d0da\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709742 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-svc\") pod \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709792 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5ct5m\" (UniqueName: \"kubernetes.io/projected/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-kube-api-access-5ct5m\") pod \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709833 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a92a932b-ef66-408c-883e-99412a94d0da-ovsdb-rundir\") pod \"a92a932b-ef66-408c-883e-99412a94d0da\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709855 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-combined-ca-bundle\") pod \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709877 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-sb\") pod \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.709900 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4tqk\" (UniqueName: \"kubernetes.io/projected/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-kube-api-access-l4tqk\") pod \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.711506 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a92a932b-ef66-408c-883e-99412a94d0da\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.711532 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-config\") pod \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.711574 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config\") pod \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\" (UID: \"9a763079-28f4-4dd4-8ad8-96bc23a29fb8\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.711598 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-scripts\") pod \"a92a932b-ef66-408c-883e-99412a94d0da\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.711618 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-ovsdbserver-nb-tls-certs\") pod \"a92a932b-ef66-408c-883e-99412a94d0da\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.711876 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dx55z\" (UniqueName: \"kubernetes.io/projected/a92a932b-ef66-408c-883e-99412a94d0da-kube-api-access-dx55z\") pod \"a92a932b-ef66-408c-883e-99412a94d0da\" (UID: \"a92a932b-ef66-408c-883e-99412a94d0da\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.713051 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-swift-storage-0\") pod \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\" (UID: \"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb\") " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.711804 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92a932b-ef66-408c-883e-99412a94d0da-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a92a932b-ef66-408c-883e-99412a94d0da" (UID: "a92a932b-ef66-408c-883e-99412a94d0da"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.713518 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-config" (OuterVolumeSpecName: "config") pod "a92a932b-ef66-408c-883e-99412a94d0da" (UID: "a92a932b-ef66-408c-883e-99412a94d0da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.713894 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afb1ca65-412b-4179-ac61-4904d9f6e001" (UID: "afb1ca65-412b-4179-ac61-4904d9f6e001"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.723890 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb1ca65-412b-4179-ac61-4904d9f6e001-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.736644 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.738253 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a92a932b-ef66-408c-883e-99412a94d0da-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.726187 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-scripts" (OuterVolumeSpecName: "scripts") pod "a92a932b-ef66-408c-883e-99412a94d0da" (UID: "a92a932b-ef66-408c-883e-99412a94d0da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.731831 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-kube-api-access-l4tqk" (OuterVolumeSpecName: "kube-api-access-l4tqk") pod "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" (UID: "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb"). 
InnerVolumeSpecName "kube-api-access-l4tqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.768613 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a92a932b-ef66-408c-883e-99412a94d0da" (UID: "a92a932b-ef66-408c-883e-99412a94d0da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.776797 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "a92a932b-ef66-408c-883e-99412a94d0da" (UID: "a92a932b-ef66-408c-883e-99412a94d0da"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.776967 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92a932b-ef66-408c-883e-99412a94d0da-kube-api-access-dx55z" (OuterVolumeSpecName: "kube-api-access-dx55z") pod "a92a932b-ef66-408c-883e-99412a94d0da" (UID: "a92a932b-ef66-408c-883e-99412a94d0da"). InnerVolumeSpecName "kube-api-access-dx55z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.777638 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-kube-api-access-5ct5m" (OuterVolumeSpecName: "kube-api-access-5ct5m") pod "9a763079-28f4-4dd4-8ad8-96bc23a29fb8" (UID: "9a763079-28f4-4dd4-8ad8-96bc23a29fb8"). InnerVolumeSpecName "kube-api-access-5ct5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.810234 4889 scope.go:117] "RemoveContainer" containerID="3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662" Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.811014 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662\": container with ID starting with 3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662 not found: ID does not exist" containerID="3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.811072 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662"} err="failed to get container status \"3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662\": rpc error: code = NotFound desc = could not find container \"3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662\": container with ID starting with 3998a8dd252302a1576f87d0fa97aba3ab4a5cce32856306fde949c2005a0662 not found: ID does not exist" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.811158 4889 scope.go:117] "RemoveContainer" containerID="5eb83b765e57ee122fbe625e86ad95bb06d3206e2ca82bde40a2997e84a961fb" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.840567 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.840677 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ct5m\" (UniqueName: \"kubernetes.io/projected/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-kube-api-access-5ct5m\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.840694 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4tqk\" (UniqueName: \"kubernetes.io/projected/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-kube-api-access-l4tqk\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.840751 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.840764 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a92a932b-ef66-408c-883e-99412a94d0da-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.840819 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx55z\" (UniqueName: \"kubernetes.io/projected/a92a932b-ef66-408c-883e-99412a94d0da-kube-api-access-dx55z\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.881358 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b28087a7afb2a256eeea56a89dcb8579fa36a7333356ade368829f77738b9428" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 28 07:11:37 crc kubenswrapper[4889]: 
E1128 07:11:37.889839 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b28087a7afb2a256eeea56a89dcb8579fa36a7333356ade368829f77738b9428" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.891277 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b28087a7afb2a256eeea56a89dcb8579fa36a7333356ade368829f77738b9428" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 28 07:11:37 crc kubenswrapper[4889]: E1128 07:11:37.891390 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ecf7fcae-8493-4333-96c4-d4692a144187" containerName="galera" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.922258 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" (UID: "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.927897 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a92a932b-ef66-408c-883e-99412a94d0da" (UID: "a92a932b-ef66-408c-883e-99412a94d0da"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.941878 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" (UID: "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.942485 4889 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.942508 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.942518 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.955410 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.970191 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" (UID: "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:37 crc kubenswrapper[4889]: I1128 07:11:37.970259 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9a763079-28f4-4dd4-8ad8-96bc23a29fb8" (UID: "9a763079-28f4-4dd4-8ad8-96bc23a29fb8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.005776 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9a763079-28f4-4dd4-8ad8-96bc23a29fb8" (UID: "9a763079-28f4-4dd4-8ad8-96bc23a29fb8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.013211 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-config" (OuterVolumeSpecName: "config") pod "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" (UID: "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.020468 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a763079-28f4-4dd4-8ad8-96bc23a29fb8" (UID: "9a763079-28f4-4dd4-8ad8-96bc23a29fb8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.033111 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" (UID: "8c1f8a48-5ca3-46e1-8246-b8c6737b45cb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.044177 4889 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.044208 4889 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.044218 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.044227 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.044238 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.044246 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.044254 4889 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a763079-28f4-4dd4-8ad8-96bc23a29fb8-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.053410 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "a92a932b-ef66-408c-883e-99412a94d0da" (UID: "a92a932b-ef66-408c-883e-99412a94d0da"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:38 crc kubenswrapper[4889]: I1128 07:11:38.147655 4889 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92a932b-ef66-408c-883e-99412a94d0da-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.248405 4889 scope.go:117] "RemoveContainer" containerID="9c0bdc3b1d5da3bad6cec36b156ddd5f2770493a35c89a25fb3741002c171edc" Nov 28 07:11:39 crc kubenswrapper[4889]: E1128 07:11:38.249433 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:11:39 crc kubenswrapper[4889]: E1128 07:11:38.249496 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data podName:9b744978-786e-4ab0-8a5c-1e8e3f9a2809 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:42.249476905 +0000 UTC m=+1425.219711060 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data") pod "rabbitmq-cell1-server-0" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.277781 4889 generic.go:334] "Generic (PLEG): container finished" podID="ecf7fcae-8493-4333-96c4-d4692a144187" containerID="b28087a7afb2a256eeea56a89dcb8579fa36a7333356ade368829f77738b9428" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.277840 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ecf7fcae-8493-4333-96c4-d4692a144187","Type":"ContainerDied","Data":"b28087a7afb2a256eeea56a89dcb8579fa36a7333356ade368829f77738b9428"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.279004 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron08e6-account-delete-rzzxh" event={"ID":"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d","Type":"ContainerStarted","Data":"023b56bbd7654a9dd84d169850cac13869a43fb115ce30e6f66f88f81571cb5d"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.280871 4889 generic.go:334] "Generic (PLEG): container finished" podID="8cff4827-368d-4e19-beb0-b22b71032f26" containerID="def89232890ff2ea1170bf03b014fd49855e7baececf04474b47909d8032e453" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.280889 4889 generic.go:334] "Generic (PLEG): container finished" podID="8cff4827-368d-4e19-beb0-b22b71032f26" containerID="6dc7556254073930e346ad003426e246a1fe721ea68cbc74809582204ec3e3ad" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.280920 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-548d6bf557-pbtfl" event={"ID":"8cff4827-368d-4e19-beb0-b22b71032f26","Type":"ContainerDied","Data":"def89232890ff2ea1170bf03b014fd49855e7baececf04474b47909d8032e453"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.280937 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-548d6bf557-pbtfl" event={"ID":"8cff4827-368d-4e19-beb0-b22b71032f26","Type":"ContainerDied","Data":"6dc7556254073930e346ad003426e246a1fe721ea68cbc74809582204ec3e3ad"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.280946 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-548d6bf557-pbtfl" 
event={"ID":"8cff4827-368d-4e19-beb0-b22b71032f26","Type":"ContainerDied","Data":"9d34ec502260920510047850ef0fe1d6c9af40106c243576c843ed178ee5cbea"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.280955 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d34ec502260920510047850ef0fe1d6c9af40106c243576c843ed178ee5cbea" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.281797 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican9d4e-account-delete-w2cq4" event={"ID":"5b9c3bd5-587a-40cb-b489-764fd5f98ca0","Type":"ContainerStarted","Data":"0cd213f74142ef4a1218558a759dd8f4bb5aceda309f20fbc7fcb0ff54186285"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.284926 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancef2a1-account-delete-wwplw" event={"ID":"32d7e485-1911-4206-bf42-9a57a855a880","Type":"ContainerStarted","Data":"13aa4d1181851fe06902bc52a954dd70b2a6c609df51ae6f31dd02eff6f0ff68"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.287935 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a92a932b-ef66-408c-883e-99412a94d0da/ovsdbserver-nb/0.log" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.287978 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a92a932b-ef66-408c-883e-99412a94d0da","Type":"ContainerDied","Data":"296f76de673da2298bf4e685ff9c9657b82484245813d137ff9d0927ade544c1"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.288035 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.300476 4889 generic.go:334] "Generic (PLEG): container finished" podID="8da3d6a4-5874-4305-b358-9765720b68f9" containerID="ccd9a3451af7543863610a3361f75c597a7f1f94847eeb509c2590583ff9a2bb" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.300540 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement75e4-account-delete-x6dpp" event={"ID":"8da3d6a4-5874-4305-b358-9765720b68f9","Type":"ContainerDied","Data":"ccd9a3451af7543863610a3361f75c597a7f1f94847eeb509c2590583ff9a2bb"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.300567 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement75e4-account-delete-x6dpp" event={"ID":"8da3d6a4-5874-4305-b358-9765720b68f9","Type":"ContainerStarted","Data":"a66348010096f93673a1cfa1653322bdd71b6b74a351bd0c0b4ba5b6b84972cf"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.306367 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" event={"ID":"8c1f8a48-5ca3-46e1-8246-b8c6737b45cb","Type":"ContainerDied","Data":"e1687fc4fb4c147c5234087ff66a74472664de15b21ffc743ce7d798f4241678"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.306453 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8fc4ccc9-wc58j" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.307748 4889 generic.go:334] "Generic (PLEG): container finished" podID="d578f2c7-2fee-4032-b63e-0dc8e5d1371f" containerID="29d04d773589b050b9a77e90cdf11d2996f36460fa7d4f5ca93bba075ac8e4fd" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.307804 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d578f2c7-2fee-4032-b63e-0dc8e5d1371f","Type":"ContainerDied","Data":"29d04d773589b050b9a77e90cdf11d2996f36460fa7d4f5ca93bba075ac8e4fd"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.307826 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d578f2c7-2fee-4032-b63e-0dc8e5d1371f","Type":"ContainerDied","Data":"735e526bec38bdc5a76467769391ff95a9ff6e58007eaa305c7bc47cdd7e8aad"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.307837 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735e526bec38bdc5a76467769391ff95a9ff6e58007eaa305c7bc47cdd7e8aad" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.334994 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.335027 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.335038 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.335114 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.335144 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.335161 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.341604 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.354311 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell03d10-account-delete-vhnfs" event={"ID":"fe87e12e-e732-4a38-b9bc-0e6000da9bd8","Type":"ContainerStarted","Data":"0166b6ea79f4750dee5e9324971f74bd91d64abe3a07dbd6c925e551b04a9c45"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.356958 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi670d-account-delete-q5q9k" event={"ID":"00c7d31d-27e7-45cc-abb6-bae21de9135f","Type":"ContainerStarted","Data":"698e96c7a3d29e1670c0cf8b2281b42f1c0d44611909a931b635d677935e5a02"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.358861 4889 generic.go:334] "Generic (PLEG): container finished" podID="c41bad87-7181-45c9-ad09-bf49b278416d" containerID="411c51ac4022ce773c6ca107021fdf0aa7e87825c86f41edfb9eef55abeb15ae" exitCode=143 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.358910 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fd84fdbd8-ztpds" event={"ID":"c41bad87-7181-45c9-ad09-bf49b278416d","Type":"ContainerDied","Data":"411c51ac4022ce773c6ca107021fdf0aa7e87825c86f41edfb9eef55abeb15ae"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.360458 4889 generic.go:334] "Generic (PLEG): container finished" podID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerID="a93ee07b36d6a14a36fd7e9a347daa5368daca2560780121eb1189c052d07f4b" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.360499 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c99d75dcc-cgtnj" event={"ID":"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef","Type":"ContainerDied","Data":"a93ee07b36d6a14a36fd7e9a347daa5368daca2560780121eb1189c052d07f4b"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.362929 4889 generic.go:334] "Generic (PLEG): container finished" podID="f07c52ed-8e06-4dc1-8400-09a9dba35926" containerID="79ae825f91682d3ef9316a02a63bea53ce47014451ad8747ccf4c29d0f23bd3f" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.362991 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell16403-account-delete-l5wmp" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.365125 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder6a1b-account-delete-bdn66" event={"ID":"f07c52ed-8e06-4dc1-8400-09a9dba35926","Type":"ContainerDied","Data":"79ae825f91682d3ef9316a02a63bea53ce47014451ad8747ccf4c29d0f23bd3f"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.394073 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.437316 4889 scope.go:117] "RemoveContainer" containerID="e38cd97bce0fc8d698d4e44b7375fde620f8a3ee986dc3c97e437a42647d9d7f" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.447206 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.452472 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-nova-novncproxy-tls-certs\") pod \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.452519 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-vencrypt-tls-certs\") pod \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.452554 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-config-data\") pod \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.452823 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd87p\" (UniqueName: \"kubernetes.io/projected/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-kube-api-access-kd87p\") pod \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.452853 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-combined-ca-bundle\") pod \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\" (UID: \"d578f2c7-2fee-4032-b63e-0dc8e5d1371f\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.458010 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-kube-api-access-kd87p" (OuterVolumeSpecName: "kube-api-access-kd87p") pod "d578f2c7-2fee-4032-b63e-0dc8e5d1371f" (UID: "d578f2c7-2fee-4032-b63e-0dc8e5d1371f"). InnerVolumeSpecName "kube-api-access-kd87p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.464628 4889 scope.go:117] "RemoveContainer" containerID="8b167955e43f6529720269cc5280735e5f9b8f62a031ef0fc8db7679214765f7" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.465990 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd87p\" (UniqueName: \"kubernetes.io/projected/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-kube-api-access-kd87p\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.497332 4889 scope.go:117] "RemoveContainer" containerID="2365c7b49a5eac186167c60fde0c3ed33a799576881ae61606acd63b56a773ae" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.506161 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.541971 4889 scope.go:117] "RemoveContainer" containerID="8e098cfc463032d4804741da5efedf5c6767301dd053935296f53b49eb1889cf" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.562013 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.566649 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-galera-tls-certs\") pod \"ecf7fcae-8493-4333-96c4-d4692a144187\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.566688 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-log-httpd\") pod \"8cff4827-368d-4e19-beb0-b22b71032f26\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.566794 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-config-data\") pod \"8cff4827-368d-4e19-beb0-b22b71032f26\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.566856 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-combined-ca-bundle\") pod \"8cff4827-368d-4e19-beb0-b22b71032f26\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.566940 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-public-tls-certs\") pod \"8cff4827-368d-4e19-beb0-b22b71032f26\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.566969 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-etc-swift\") pod \"8cff4827-368d-4e19-beb0-b22b71032f26\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567010 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2zsp\" (UniqueName: \"kubernetes.io/projected/ecf7fcae-8493-4333-96c4-d4692a144187-kube-api-access-p2zsp\") pod \"ecf7fcae-8493-4333-96c4-d4692a144187\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567071 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ecf7fcae-8493-4333-96c4-d4692a144187\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567112 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-default\") pod \"ecf7fcae-8493-4333-96c4-d4692a144187\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " Nov 
28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567150 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-internal-tls-certs\") pod \"8cff4827-368d-4e19-beb0-b22b71032f26\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567176 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-kolla-config\") pod \"ecf7fcae-8493-4333-96c4-d4692a144187\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567209 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-operator-scripts\") pod \"ecf7fcae-8493-4333-96c4-d4692a144187\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567245 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-generated\") pod \"ecf7fcae-8493-4333-96c4-d4692a144187\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567281 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-combined-ca-bundle\") pod \"ecf7fcae-8493-4333-96c4-d4692a144187\" (UID: \"ecf7fcae-8493-4333-96c4-d4692a144187\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567311 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk7gs\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-kube-api-access-bk7gs\") pod \"8cff4827-368d-4e19-beb0-b22b71032f26\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.567335 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-run-httpd\") pod \"8cff4827-368d-4e19-beb0-b22b71032f26\" (UID: \"8cff4827-368d-4e19-beb0-b22b71032f26\") " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.570825 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.571476 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ecf7fcae-8493-4333-96c4-d4692a144187" (UID: "ecf7fcae-8493-4333-96c4-d4692a144187"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.571875 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "ecf7fcae-8493-4333-96c4-d4692a144187" (UID: "ecf7fcae-8493-4333-96c4-d4692a144187"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.576284 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8fc4ccc9-wc58j"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.578312 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ecf7fcae-8493-4333-96c4-d4692a144187" (UID: "ecf7fcae-8493-4333-96c4-d4692a144187"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.582150 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8cff4827-368d-4e19-beb0-b22b71032f26" (UID: "8cff4827-368d-4e19-beb0-b22b71032f26"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.583001 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "ecf7fcae-8493-4333-96c4-d4692a144187" (UID: "ecf7fcae-8493-4333-96c4-d4692a144187"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.583252 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8cff4827-368d-4e19-beb0-b22b71032f26" (UID: "8cff4827-368d-4e19-beb0-b22b71032f26"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.585352 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d8fc4ccc9-wc58j"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.595541 4889 scope.go:117] "RemoveContainer" containerID="3c5f80a48b3ca25b6b7f1a49e74a9bac715cb48a9bb541e28b85cc92405a168e" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.617756 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell16403-account-delete-l5wmp"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.618633 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-kube-api-access-bk7gs" (OuterVolumeSpecName: "kube-api-access-bk7gs") pod "8cff4827-368d-4e19-beb0-b22b71032f26" (UID: "8cff4827-368d-4e19-beb0-b22b71032f26"). InnerVolumeSpecName "kube-api-access-bk7gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.623094 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8cff4827-368d-4e19-beb0-b22b71032f26" (UID: "8cff4827-368d-4e19-beb0-b22b71032f26"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.623897 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf7fcae-8493-4333-96c4-d4692a144187-kube-api-access-p2zsp" (OuterVolumeSpecName: "kube-api-access-p2zsp") pod "ecf7fcae-8493-4333-96c4-d4692a144187" (UID: "ecf7fcae-8493-4333-96c4-d4692a144187"). InnerVolumeSpecName "kube-api-access-p2zsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.631530 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell16403-account-delete-l5wmp"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.635974 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "ecf7fcae-8493-4333-96c4-d4692a144187" (UID: "ecf7fcae-8493-4333-96c4-d4692a144187"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670231 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670266 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqjnj\" (UniqueName: \"kubernetes.io/projected/afb1ca65-412b-4179-ac61-4904d9f6e001-kube-api-access-hqjnj\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670283 4889 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670295 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecf7fcae-8493-4333-96c4-d4692a144187-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670309 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ecf7fcae-8493-4333-96c4-d4692a144187-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670321 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk7gs\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-kube-api-access-bk7gs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670332 4889 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670343 4889 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cff4827-368d-4e19-beb0-b22b71032f26-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670355 4889 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cff4827-368d-4e19-beb0-b22b71032f26-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 
crc kubenswrapper[4889]: I1128 07:11:38.670366 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2zsp\" (UniqueName: \"kubernetes.io/projected/ecf7fcae-8493-4333-96c4-d4692a144187-kube-api-access-p2zsp\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.670391 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.691884 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-config-data" (OuterVolumeSpecName: "config-data") pod "d578f2c7-2fee-4032-b63e-0dc8e5d1371f" (UID: "d578f2c7-2fee-4032-b63e-0dc8e5d1371f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.723340 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d578f2c7-2fee-4032-b63e-0dc8e5d1371f" (UID: "d578f2c7-2fee-4032-b63e-0dc8e5d1371f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.738284 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.755140 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecf7fcae-8493-4333-96c4-d4692a144187" (UID: "ecf7fcae-8493-4333-96c4-d4692a144187"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.792077 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.792109 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.792126 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.792140 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.817740 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "d578f2c7-2fee-4032-b63e-0dc8e5d1371f" (UID: "d578f2c7-2fee-4032-b63e-0dc8e5d1371f"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.822004 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8cff4827-368d-4e19-beb0-b22b71032f26" (UID: "8cff4827-368d-4e19-beb0-b22b71032f26"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.825407 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "ecf7fcae-8493-4333-96c4-d4692a144187" (UID: "ecf7fcae-8493-4333-96c4-d4692a144187"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.835915 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8cff4827-368d-4e19-beb0-b22b71032f26" (UID: "8cff4827-368d-4e19-beb0-b22b71032f26"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.840817 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-config-data" (OuterVolumeSpecName: "config-data") pod "8cff4827-368d-4e19-beb0-b22b71032f26" (UID: "8cff4827-368d-4e19-beb0-b22b71032f26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.856641 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cff4827-368d-4e19-beb0-b22b71032f26" (UID: "8cff4827-368d-4e19-beb0-b22b71032f26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.869912 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "d578f2c7-2fee-4032-b63e-0dc8e5d1371f" (UID: "d578f2c7-2fee-4032-b63e-0dc8e5d1371f"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.893541 4889 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf7fcae-8493-4333-96c4-d4692a144187-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.893569 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.893579 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.893590 4889 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.893600 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.893608 4889 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d578f2c7-2fee-4032-b63e-0dc8e5d1371f-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.893615 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cff4827-368d-4e19-beb0-b22b71032f26-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.967615 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.967893 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="ceilometer-central-agent" containerID="cri-o://5b791b0ee5ba22707eb669b678aaed1200bebe3fe2bc24e3b032cc3e5c25310a" gracePeriod=30 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.968230 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="proxy-httpd" containerID="cri-o://0c2cdf84f726e62f45e47d4328d523cb24c975652d68073e00aa714625b828c0" gracePeriod=30 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.968265 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="sg-core" containerID="cri-o://eafa471d1e83e5c4174ab8b6222ceaeab6bcb18dd8a04cfa44cdd7c9aaae7176" gracePeriod=30 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.968311 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="ceilometer-notification-agent" containerID="cri-o://206b0078bfc628bc38b7ee44283277823901e99e7548ec8345145fcecd5a4005" gracePeriod=30 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 
07:11:38.998624 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:38.998846 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f9aacedc-5e53-4c26-8ded-2af578a7de41" containerName="kube-state-metrics" containerID="cri-o://cfce3bc5d6f0828a73170fd49a5f64b6f79394b204fb2e4a2576389017af7153" gracePeriod=30 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.133678 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.134260 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="5276ecd4-549a-4a41-94be-6408535b2492" containerName="memcached" containerID="cri-o://49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409" gracePeriod=30 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.178687 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-r6j84"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.212456 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vvjhk"] Nov 28 07:11:39 crc kubenswrapper[4889]: E1128 07:11:39.220930 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:11:39 crc kubenswrapper[4889]: E1128 07:11:39.221000 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data podName:90d501b3-ad2c-4fb8-814d-411dc2a11f20 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:43.220985954 +0000 UTC m=+1426.191220109 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data") pod "rabbitmq-server-0" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20") : configmap "rabbitmq-config-data" not found Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.235565 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-r6j84"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.257988 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.275668 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vvjhk"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.288351 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-55c8d644db-cqxsn"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.288687 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-55c8d644db-cqxsn" podUID="07dfa6e3-4c33-403d-96c6-819c44224466" containerName="keystone-api" containerID="cri-o://6a06f1ca551a6cfc2a03c4624310248aaa2f03752d3fc88f4cfb44ec7049ede3" gracePeriod=30 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.303066 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5fjcb"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.310995 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5fjcb"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.320518 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-df7b-account-create-update-8ltbq"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.327751 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-df7b-account-create-update-8ltbq"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.342520 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0713e6-6a1f-45ee-9929-4ab652d46e06" path="/var/lib/kubelet/pods/0c0713e6-6a1f-45ee-9929-4ab652d46e06/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.343139 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38118581-7a75-43c6-82b5-cbcf739b47b8" path="/var/lib/kubelet/pods/38118581-7a75-43c6-82b5-cbcf739b47b8/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.343988 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" path="/var/lib/kubelet/pods/7c960973-a307-4a8a-9fe6-885450c512e0/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.344535 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" path="/var/lib/kubelet/pods/8c1f8a48-5ca3-46e1-8246-b8c6737b45cb/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.345498 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a763079-28f4-4dd4-8ad8-96bc23a29fb8" path="/var/lib/kubelet/pods/9a763079-28f4-4dd4-8ad8-96bc23a29fb8/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.346089 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92a932b-ef66-408c-883e-99412a94d0da" path="/var/lib/kubelet/pods/a92a932b-ef66-408c-883e-99412a94d0da/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.346575 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="afb1ca65-412b-4179-ac61-4904d9f6e001" path="/var/lib/kubelet/pods/afb1ca65-412b-4179-ac61-4904d9f6e001/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.347468 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e1fb75-c9d4-40d4-97fa-41162ea57360" path="/var/lib/kubelet/pods/b5e1fb75-c9d4-40d4-97fa-41162ea57360/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.348139 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85" path="/var/lib/kubelet/pods/fc1cb2bd-21c2-4c08-9ad4-3eb7a20ffc85/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.348665 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5deb3d-df4a-48e4-844b-35247485825a" path="/var/lib/kubelet/pods/fd5deb3d-df4a-48e4-844b-35247485825a/volumes" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.403928 4889 generic.go:334] "Generic (PLEG): container finished" podID="5b9c3bd5-587a-40cb-b489-764fd5f98ca0" containerID="4df87c7cf9cdf092f6514707cb04ba80cb0fb8d948bf9c9993b17b16cfe34085" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.404005 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican9d4e-account-delete-w2cq4" event={"ID":"5b9c3bd5-587a-40cb-b489-764fd5f98ca0","Type":"ContainerDied","Data":"4df87c7cf9cdf092f6514707cb04ba80cb0fb8d948bf9c9993b17b16cfe34085"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.409036 4889 generic.go:334] "Generic (PLEG): container finished" podID="f9aacedc-5e53-4c26-8ded-2af578a7de41" containerID="cfce3bc5d6f0828a73170fd49a5f64b6f79394b204fb2e4a2576389017af7153" exitCode=2 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.409079 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9aacedc-5e53-4c26-8ded-2af578a7de41","Type":"ContainerDied","Data":"cfce3bc5d6f0828a73170fd49a5f64b6f79394b204fb2e4a2576389017af7153"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.414671 4889 generic.go:334] "Generic (PLEG): container finished" podID="00c7d31d-27e7-45cc-abb6-bae21de9135f" containerID="c8514c1d93d6758c1c17ee226579c728c34fd0e086a2444dc40c4d5d2304872e" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.414721 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi670d-account-delete-q5q9k" event={"ID":"00c7d31d-27e7-45cc-abb6-bae21de9135f","Type":"ContainerDied","Data":"c8514c1d93d6758c1c17ee226579c728c34fd0e086a2444dc40c4d5d2304872e"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.424660 4889 generic.go:334] "Generic (PLEG): container finished" podID="32d7e485-1911-4206-bf42-9a57a855a880" containerID="258eebe34d9ee1f7f4a1d22b1cba432730424374a547da27abf949d733c647d0" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.424751 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancef2a1-account-delete-wwplw" event={"ID":"32d7e485-1911-4206-bf42-9a57a855a880","Type":"ContainerDied","Data":"258eebe34d9ee1f7f4a1d22b1cba432730424374a547da27abf949d733c647d0"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.433250 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ecf7fcae-8493-4333-96c4-d4692a144187","Type":"ContainerDied","Data":"f9d98a4cd9c523561c1569cff3e3e23fb5469b8845e4f479b87c6cfd2df72bd9"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.433295 4889 scope.go:117] "RemoveContainer" 
containerID="b28087a7afb2a256eeea56a89dcb8579fa36a7333356ade368829f77738b9428" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.433645 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.444851 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-47lc7"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.452983 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-47lc7"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.470198 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder6a1b-account-delete-bdn66"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.484222 4889 generic.go:334] "Generic (PLEG): container finished" podID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerID="0c2cdf84f726e62f45e47d4328d523cb24c975652d68073e00aa714625b828c0" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.484248 4889 generic.go:334] "Generic (PLEG): container finished" podID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerID="eafa471d1e83e5c4174ab8b6222ceaeab6bcb18dd8a04cfa44cdd7c9aaae7176" exitCode=2 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.484300 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerDied","Data":"0c2cdf84f726e62f45e47d4328d523cb24c975652d68073e00aa714625b828c0"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.484327 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerDied","Data":"eafa471d1e83e5c4174ab8b6222ceaeab6bcb18dd8a04cfa44cdd7c9aaae7176"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.488410 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron08e6-account-delete-rzzxh" event={"ID":"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d","Type":"ContainerStarted","Data":"1557253848b8173208cc1bcb66293e44e9523ff9fbd1b1b06ff1d6db2d81cb11"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.504257 4889 generic.go:334] "Generic (PLEG): container finished" podID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerID="916841af475c0d0409c239e605ccdb71c123e2852a495b97c814602f89fea785" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.504347 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bbc5ddd4-vzclt" event={"ID":"010c335b-59f4-4016-976b-ac71eaf5d14f","Type":"ContainerDied","Data":"916841af475c0d0409c239e605ccdb71c123e2852a495b97c814602f89fea785"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.514511 4889 generic.go:334] "Generic (PLEG): container finished" podID="fe87e12e-e732-4a38-b9bc-0e6000da9bd8" containerID="b03a3bf0facdd489729d5f0987b893b7764a5dde678763f6b8bcdbcc09c73388" exitCode=0 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.514953 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell03d10-account-delete-vhnfs" event={"ID":"fe87e12e-e732-4a38-b9bc-0e6000da9bd8","Type":"ContainerDied","Data":"b03a3bf0facdd489729d5f0987b893b7764a5dde678763f6b8bcdbcc09c73388"} Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.516184 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api" probeResult="failure" 
output="Get \"https://10.217.0.161:8776/healthcheck\": read tcp 10.217.0.2:33534->10.217.0.161:8776: read: connection reset by peer" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.523670 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.524498 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-548d6bf557-pbtfl" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.543977 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6a1b-account-create-update-q5wnf"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.586805 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6a1b-account-create-update-q5wnf"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.622020 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.636792 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.658132 4889 scope.go:117] "RemoveContainer" containerID="107d9bd44e85df18c82b4aeccfad805cf3cde4845822859d34619bdc83c08a53" Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.670771 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.678423 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.724400 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nstbj"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.737059 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nstbj"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.745009 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-548d6bf557-pbtfl"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.776218 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-548d6bf557-pbtfl"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.789034 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement75e4-account-delete-x6dpp"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.816331 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" containerName="galera" containerID="cri-o://55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f" gracePeriod=30 Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.824080 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-75e4-account-create-update-bkcxt"] Nov 28 07:11:39 crc kubenswrapper[4889]: I1128 07:11:39.839627 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-75e4-account-create-update-bkcxt"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.011990 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hgc8n"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.027867 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hgc8n"] Nov 28 07:11:40 crc 
kubenswrapper[4889]: I1128 07:11:40.066010 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9d4e-account-create-update-skdhq"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.080998 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican9d4e-account-delete-w2cq4"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.092276 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9d4e-account-create-update-skdhq"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.105226 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fd84fdbd8-ztpds" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:47974->10.217.0.160:9311: read: connection reset by peer" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.106810 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fd84fdbd8-ztpds" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:47976->10.217.0.160:9311: read: connection reset by peer" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.110746 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.120892 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7grgp"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.135761 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7grgp"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.143338 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-combined-ca-bundle\") pod \"f9aacedc-5e53-4c26-8ded-2af578a7de41\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.143437 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brqw6\" (UniqueName: \"kubernetes.io/projected/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-api-access-brqw6\") pod \"f9aacedc-5e53-4c26-8ded-2af578a7de41\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.143519 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-config\") pod \"f9aacedc-5e53-4c26-8ded-2af578a7de41\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.143578 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-certs\") pod \"f9aacedc-5e53-4c26-8ded-2af578a7de41\" (UID: \"f9aacedc-5e53-4c26-8ded-2af578a7de41\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.158140 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-api-access-brqw6" (OuterVolumeSpecName: 
"kube-api-access-brqw6") pod "f9aacedc-5e53-4c26-8ded-2af578a7de41" (UID: "f9aacedc-5e53-4c26-8ded-2af578a7de41"). InnerVolumeSpecName "kube-api-access-brqw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.169567 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:34066->10.217.0.199:8775: read: connection reset by peer" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.173035 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:34068->10.217.0.199:8775: read: connection reset by peer" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.187660 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f2a1-account-create-update-lhfcx"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.207148 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder6a1b-account-delete-bdn66" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.245885 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brqw6\" (UniqueName: \"kubernetes.io/projected/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-api-access-brqw6\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.247163 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f2a1-account-create-update-lhfcx"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.262375 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glancef2a1-account-delete-wwplw"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.267546 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement75e4-account-delete-x6dpp" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.268252 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "f9aacedc-5e53-4c26-8ded-2af578a7de41" (UID: "f9aacedc-5e53-4c26-8ded-2af578a7de41"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.273922 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "f9aacedc-5e53-4c26-8ded-2af578a7de41" (UID: "f9aacedc-5e53-4c26-8ded-2af578a7de41"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.278010 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9aacedc-5e53-4c26-8ded-2af578a7de41" (UID: "f9aacedc-5e53-4c26-8ded-2af578a7de41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.279266 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.314988 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4xrwj"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.325002 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4xrwj"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.343822 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron08e6-account-delete-rzzxh"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368265 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-scripts\") pod \"010c335b-59f4-4016-976b-ac71eaf5d14f\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368335 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-internal-tls-certs\") pod \"010c335b-59f4-4016-976b-ac71eaf5d14f\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368362 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-public-tls-certs\") pod \"010c335b-59f4-4016-976b-ac71eaf5d14f\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368451 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-config-data\") pod \"010c335b-59f4-4016-976b-ac71eaf5d14f\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368496 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/010c335b-59f4-4016-976b-ac71eaf5d14f-logs\") pod \"010c335b-59f4-4016-976b-ac71eaf5d14f\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368516 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f07c52ed-8e06-4dc1-8400-09a9dba35926-operator-scripts\") pod \"f07c52ed-8e06-4dc1-8400-09a9dba35926\" (UID: \"f07c52ed-8e06-4dc1-8400-09a9dba35926\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368565 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-combined-ca-bundle\") pod \"010c335b-59f4-4016-976b-ac71eaf5d14f\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368603 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvfv6\" (UniqueName: \"kubernetes.io/projected/010c335b-59f4-4016-976b-ac71eaf5d14f-kube-api-access-tvfv6\") pod \"010c335b-59f4-4016-976b-ac71eaf5d14f\" (UID: \"010c335b-59f4-4016-976b-ac71eaf5d14f\") " Nov 28 
07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368631 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg5gm\" (UniqueName: \"kubernetes.io/projected/f07c52ed-8e06-4dc1-8400-09a9dba35926-kube-api-access-tg5gm\") pod \"f07c52ed-8e06-4dc1-8400-09a9dba35926\" (UID: \"f07c52ed-8e06-4dc1-8400-09a9dba35926\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368648 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da3d6a4-5874-4305-b358-9765720b68f9-operator-scripts\") pod \"8da3d6a4-5874-4305-b358-9765720b68f9\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.368692 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg5zc\" (UniqueName: \"kubernetes.io/projected/8da3d6a4-5874-4305-b358-9765720b68f9-kube-api-access-pg5zc\") pod \"8da3d6a4-5874-4305-b358-9765720b68f9\" (UID: \"8da3d6a4-5874-4305-b358-9765720b68f9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.369105 4889 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.369127 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.369136 4889 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9aacedc-5e53-4c26-8ded-2af578a7de41-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.381210 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/010c335b-59f4-4016-976b-ac71eaf5d14f-logs" (OuterVolumeSpecName: "logs") pod "010c335b-59f4-4016-976b-ac71eaf5d14f" (UID: "010c335b-59f4-4016-976b-ac71eaf5d14f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.384011 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da3d6a4-5874-4305-b358-9765720b68f9-kube-api-access-pg5zc" (OuterVolumeSpecName: "kube-api-access-pg5zc") pod "8da3d6a4-5874-4305-b358-9765720b68f9" (UID: "8da3d6a4-5874-4305-b358-9765720b68f9"). InnerVolumeSpecName "kube-api-access-pg5zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.384639 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07c52ed-8e06-4dc1-8400-09a9dba35926-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f07c52ed-8e06-4dc1-8400-09a9dba35926" (UID: "f07c52ed-8e06-4dc1-8400-09a9dba35926"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.388402 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-scripts" (OuterVolumeSpecName: "scripts") pod "010c335b-59f4-4016-976b-ac71eaf5d14f" (UID: "010c335b-59f4-4016-976b-ac71eaf5d14f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.398634 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010c335b-59f4-4016-976b-ac71eaf5d14f-kube-api-access-tvfv6" (OuterVolumeSpecName: "kube-api-access-tvfv6") pod "010c335b-59f4-4016-976b-ac71eaf5d14f" (UID: "010c335b-59f4-4016-976b-ac71eaf5d14f"). InnerVolumeSpecName "kube-api-access-tvfv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.399458 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8da3d6a4-5874-4305-b358-9765720b68f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8da3d6a4-5874-4305-b358-9765720b68f9" (UID: "8da3d6a4-5874-4305-b358-9765720b68f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.414228 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07c52ed-8e06-4dc1-8400-09a9dba35926-kube-api-access-tg5gm" (OuterVolumeSpecName: "kube-api-access-tg5gm") pod "f07c52ed-8e06-4dc1-8400-09a9dba35926" (UID: "f07c52ed-8e06-4dc1-8400-09a9dba35926"). InnerVolumeSpecName "kube-api-access-tg5gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.427382 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-08e6-account-create-update-l98s2"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.436724 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.458011 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.465470 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-08e6-account-create-update-l98s2"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.476965 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg5gm\" (UniqueName: \"kubernetes.io/projected/f07c52ed-8e06-4dc1-8400-09a9dba35926-kube-api-access-tg5gm\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.476997 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da3d6a4-5874-4305-b358-9765720b68f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.477005 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg5zc\" (UniqueName: \"kubernetes.io/projected/8da3d6a4-5874-4305-b358-9765720b68f9-kube-api-access-pg5zc\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.477013 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.477022 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/010c335b-59f4-4016-976b-ac71eaf5d14f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.477030 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f07c52ed-8e06-4dc1-8400-09a9dba35926-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.477037 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvfv6\" (UniqueName: \"kubernetes.io/projected/010c335b-59f4-4016-976b-ac71eaf5d14f-kube-api-access-tvfv6\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.566417 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.578783 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data-custom\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.578825 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-public-tls-certs\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.578905 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgqnp\" (UniqueName: \"kubernetes.io/projected/c7209dbe-be81-47dd-9255-c2444debdaa9-kube-api-access-rgqnp\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: 
\"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.578948 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-memcached-tls-certs\") pod \"5276ecd4-549a-4a41-94be-6408535b2492\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.578992 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.579028 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-scripts\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.579101 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7209dbe-be81-47dd-9255-c2444debdaa9-logs\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.579119 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-combined-ca-bundle\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.579145 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-internal-tls-certs\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.579168 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7209dbe-be81-47dd-9255-c2444debdaa9-etc-machine-id\") pod \"c7209dbe-be81-47dd-9255-c2444debdaa9\" (UID: \"c7209dbe-be81-47dd-9255-c2444debdaa9\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.580543 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-combined-ca-bundle\") pod \"5276ecd4-549a-4a41-94be-6408535b2492\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.580624 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzx5x\" (UniqueName: \"kubernetes.io/projected/5276ecd4-549a-4a41-94be-6408535b2492-kube-api-access-tzx5x\") pod \"5276ecd4-549a-4a41-94be-6408535b2492\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.580646 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-kolla-config\") pod \"5276ecd4-549a-4a41-94be-6408535b2492\" (UID: 
\"5276ecd4-549a-4a41-94be-6408535b2492\") " Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.580666 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-config-data\") pod \"5276ecd4-549a-4a41-94be-6408535b2492\" (UID: \"5276ecd4-549a-4a41-94be-6408535b2492\") " Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.582935 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.583228 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.583275 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.583832 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-config-data" (OuterVolumeSpecName: "config-data") pod "5276ecd4-549a-4a41-94be-6408535b2492" (UID: "5276ecd4-549a-4a41-94be-6408535b2492"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.585522 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7209dbe-be81-47dd-9255-c2444debdaa9-logs" (OuterVolumeSpecName: "logs") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.587210 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209dbe-be81-47dd-9255-c2444debdaa9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.591819 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.592676 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.593020 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dlfmr" podUID="723ca26e-f925-47cc-92e3-998ff36f3e92" containerName="ovn-controller" probeResult="failure" output=< Nov 28 07:11:40 crc kubenswrapper[4889]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Nov 28 07:11:40 crc kubenswrapper[4889]: > Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.594147 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-6s8jb"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.594637 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "5276ecd4-549a-4a41-94be-6408535b2492" (UID: "5276ecd4-549a-4a41-94be-6408535b2492"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.595156 4889 generic.go:334] "Generic (PLEG): container finished" podID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerID="567f961e244cb59c92bd5c9c282ae20876453ed39721849a5bc4edf9bc1b69a8" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.595207 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56d3b5d-d634-47f9-b252-1437066f06e8","Type":"ContainerDied","Data":"567f961e244cb59c92bd5c9c282ae20876453ed39721849a5bc4edf9bc1b69a8"} Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.595267 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.595292 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.597827 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.597864 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.602770 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.602880 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="0ca42308-451d-48e1-a74f-2c7ce6c6a53a" containerName="nova-cell0-conductor-conductor" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.608525 4889 generic.go:334] "Generic (PLEG): container finished" podID="c41bad87-7181-45c9-ad09-bf49b278416d" containerID="e680db750829bfe235068d372b958d1768e839b09f9e0ae52648fe5055964fda" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.608586 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fd84fdbd8-ztpds" 
event={"ID":"c41bad87-7181-45c9-ad09-bf49b278416d","Type":"ContainerDied","Data":"e680db750829bfe235068d372b958d1768e839b09f9e0ae52648fe5055964fda"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.609037 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5276ecd4-549a-4a41-94be-6408535b2492-kube-api-access-tzx5x" (OuterVolumeSpecName: "kube-api-access-tzx5x") pod "5276ecd4-549a-4a41-94be-6408535b2492" (UID: "5276ecd4-549a-4a41-94be-6408535b2492"). InnerVolumeSpecName "kube-api-access-tzx5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.630662 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7209dbe-be81-47dd-9255-c2444debdaa9-kube-api-access-rgqnp" (OuterVolumeSpecName: "kube-api-access-rgqnp") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). InnerVolumeSpecName "kube-api-access-rgqnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.630838 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.632855 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-6s8jb"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.645669 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-670d-account-create-update-cxn84"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.649773 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi670d-account-delete-q5q9k"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.650005 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-scripts" (OuterVolumeSpecName: "scripts") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.656011 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-670d-account-create-update-cxn84"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.657646 4889 generic.go:334] "Generic (PLEG): container finished" podID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerID="5b791b0ee5ba22707eb669b678aaed1200bebe3fe2bc24e3b032cc3e5c25310a" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.657699 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerDied","Data":"5b791b0ee5ba22707eb669b678aaed1200bebe3fe2bc24e3b032cc3e5c25310a"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.660178 4889 generic.go:334] "Generic (PLEG): container finished" podID="5276ecd4-549a-4a41-94be-6408535b2492" containerID="49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.660219 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5276ecd4-549a-4a41-94be-6408535b2492","Type":"ContainerDied","Data":"49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.660237 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5276ecd4-549a-4a41-94be-6408535b2492","Type":"ContainerDied","Data":"1798b04903f2bd8cb51a6b9f815fc819b7eb46e53c63efd94c60f6f0c76ccf4c"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.660253 4889 scope.go:117] "RemoveContainer" containerID="49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.660369 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.670885 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.679184 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kdfkg"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.680185 4889 generic.go:334] "Generic (PLEG): container finished" podID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerID="0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.680231 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7209dbe-be81-47dd-9255-c2444debdaa9","Type":"ContainerDied","Data":"0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.680277 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7209dbe-be81-47dd-9255-c2444debdaa9","Type":"ContainerDied","Data":"656156a8a62738bcf9a70d7751c74ef4524ee3b586a58332890da05940888514"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.680294 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683690 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgqnp\" (UniqueName: \"kubernetes.io/projected/c7209dbe-be81-47dd-9255-c2444debdaa9-kube-api-access-rgqnp\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683720 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683730 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7209dbe-be81-47dd-9255-c2444debdaa9-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683738 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683747 4889 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7209dbe-be81-47dd-9255-c2444debdaa9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683756 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzx5x\" (UniqueName: \"kubernetes.io/projected/5276ecd4-549a-4a41-94be-6408535b2492-kube-api-access-tzx5x\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683764 4889 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683773 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5276ecd4-549a-4a41-94be-6408535b2492-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.683780 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.686516 4889 generic.go:334] "Generic (PLEG): container finished" podID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerID="49402cf027d11e8e350b29757338f93c7461291da6a3603e125d8fc9821c3652" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.686621 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ed215c-b8d0-43fb-85bd-8531e5acf609","Type":"ContainerDied","Data":"49402cf027d11e8e350b29757338f93c7461291da6a3603e125d8fc9821c3652"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.688618 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder6a1b-account-delete-bdn66" event={"ID":"f07c52ed-8e06-4dc1-8400-09a9dba35926","Type":"ContainerDied","Data":"ccef28d07ac74be76db68682d8aa2b359ad81156f82d5f979cd6e395ef588cae"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.688642 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccef28d07ac74be76db68682d8aa2b359ad81156f82d5f979cd6e395ef588cae" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.688689 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder6a1b-account-delete-bdn66" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.688986 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5276ecd4-549a-4a41-94be-6408535b2492" (UID: "5276ecd4-549a-4a41-94be-6408535b2492"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.689232 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kdfkg"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.695263 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "010c335b-59f4-4016-976b-ac71eaf5d14f" (UID: "010c335b-59f4-4016-976b-ac71eaf5d14f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.696112 4889 generic.go:334] "Generic (PLEG): container finished" podID="4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d" containerID="1557253848b8173208cc1bcb66293e44e9523ff9fbd1b1b06ff1d6db2d81cb11" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.696176 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron08e6-account-delete-rzzxh" event={"ID":"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d","Type":"ContainerDied","Data":"1557253848b8173208cc1bcb66293e44e9523ff9fbd1b1b06ff1d6db2d81cb11"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.699599 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement75e4-account-delete-x6dpp" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.699599 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement75e4-account-delete-x6dpp" event={"ID":"8da3d6a4-5874-4305-b358-9765720b68f9","Type":"ContainerDied","Data":"a66348010096f93673a1cfa1653322bdd71b6b74a351bd0c0b4ba5b6b84972cf"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.711014 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a66348010096f93673a1cfa1653322bdd71b6b74a351bd0c0b4ba5b6b84972cf" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.711031 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9aacedc-5e53-4c26-8ded-2af578a7de41","Type":"ContainerDied","Data":"35cf157289eb6462ec06219ddc15c2733a617de52d034292bef59910991ae297"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.708757 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.715964 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-config-data" (OuterVolumeSpecName: "config-data") pod "010c335b-59f4-4016-976b-ac71eaf5d14f" (UID: "010c335b-59f4-4016-976b-ac71eaf5d14f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.718524 4889 generic.go:334] "Generic (PLEG): container finished" podID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerID="acb5766d5a413069d801db205a476d18781a4f594a9ce2359a0ea46664f3fc6f" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.718589 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660e4f27-4ee4-43d9-b155-7132c78e9a21","Type":"ContainerDied","Data":"acb5766d5a413069d801db205a476d18781a4f594a9ce2359a0ea46664f3fc6f"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.722434 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3d10-account-create-update-r6pwt"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.723383 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bbc5ddd4-vzclt" event={"ID":"010c335b-59f4-4016-976b-ac71eaf5d14f","Type":"ContainerDied","Data":"06a004a00acde25a83c3ee146f03bec54da5eaf6b1e6cd5ff431b416bcef4b1d"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.723398 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5bbc5ddd4-vzclt" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.729087 4889 generic.go:334] "Generic (PLEG): container finished" podID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerID="cff416d0a45fbb92ec6800489afd9ccbad8dbac624f5bfcda44035e9258fc559" exitCode=0 Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.729252 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae","Type":"ContainerDied","Data":"cff416d0a45fbb92ec6800489afd9ccbad8dbac624f5bfcda44035e9258fc559"} Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.735313 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "010c335b-59f4-4016-976b-ac71eaf5d14f" (UID: "010c335b-59f4-4016-976b-ac71eaf5d14f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.766456 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data" (OuterVolumeSpecName: "config-data") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.771192 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell03d10-account-delete-vhnfs"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.785806 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3d10-account-create-update-r6pwt"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.849838 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.850131 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.850141 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.850149 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.850157 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.859767 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.869848 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7209dbe-be81-47dd-9255-c2444debdaa9" (UID: "c7209dbe-be81-47dd-9255-c2444debdaa9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.883025 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "5276ecd4-549a-4a41-94be-6408535b2492" (UID: "5276ecd4-549a-4a41-94be-6408535b2492"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.884339 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cb72c62f8cc63262a8e708afde0ce2707f137cbcd34fac8af65e4f38de5d9324" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.886450 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "010c335b-59f4-4016-976b-ac71eaf5d14f" (UID: "010c335b-59f4-4016-976b-ac71eaf5d14f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.888187 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cb72c62f8cc63262a8e708afde0ce2707f137cbcd34fac8af65e4f38de5d9324" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.894366 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cb72c62f8cc63262a8e708afde0ce2707f137cbcd34fac8af65e4f38de5d9324" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.894416 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="bf4ff6f2-105e-4f62-be58-3054d0a54fed" containerName="nova-cell1-conductor-conductor" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.953980 4889 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5276ecd4-549a-4a41-94be-6408535b2492-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.954019 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.954031 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/010c335b-59f4-4016-976b-ac71eaf5d14f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: I1128 07:11:40.954044 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7209dbe-be81-47dd-9255-c2444debdaa9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:40 crc kubenswrapper[4889]: E1128 07:11:40.958409 4889 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd29dfd27_459d_4ade_8119_3c84095d0b1b.slice/crio-bc963ae674cb642cf73feedb96f166caf22e14565033105ea01efe00c81d6de0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd29dfd27_459d_4ade_8119_3c84095d0b1b.slice/crio-conmon-bc963ae674cb642cf73feedb96f166caf22e14565033105ea01efe00c81d6de0.scope\": RecentStats: unable to find data in memory cache]" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.260299 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7fd84fdbd8-ztpds" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.293669 4889 scope.go:117] "RemoveContainer" containerID="49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409" Nov 28 07:11:41 crc kubenswrapper[4889]: E1128 07:11:41.299232 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409\": container with ID starting with 49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409 not found: ID does not exist" containerID="49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.299279 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409"} err="failed to get container status \"49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409\": rpc error: code = NotFound desc = could not find container \"49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409\": container with ID starting with 49c2a92409401f0e73262951866b742c8c7aaa0cdd864e373a1b8eda84884409 not found: ID does not exist" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.299308 4889 scope.go:117] "RemoveContainer" containerID="0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.304637 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.310317 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.326293 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.327571 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.344430 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c6935e-abd3-4da3-aa74-662c45289641" path="/var/lib/kubelet/pods/07c6935e-abd3-4da3-aa74-662c45289641/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.345289 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1382a8ac-8448-45ed-8bd1-74426b1aa746" path="/var/lib/kubelet/pods/1382a8ac-8448-45ed-8bd1-74426b1aa746/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.345996 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf9f8ba-742e-483a-bbd9-9474dc0bb17e" path="/var/lib/kubelet/pods/2cf9f8ba-742e-483a-bbd9-9474dc0bb17e/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.346631 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff3fdda-c81d-4b71-967a-d482454d5e3e" path="/var/lib/kubelet/pods/3ff3fdda-c81d-4b71-967a-d482454d5e3e/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.347812 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53b94446-0a24-4eaa-ab88-62168ad8c7b7" path="/var/lib/kubelet/pods/53b94446-0a24-4eaa-ab88-62168ad8c7b7/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.349027 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e500e0f-f629-484f-b1c5-2e1b254bcee4" path="/var/lib/kubelet/pods/5e500e0f-f629-484f-b1c5-2e1b254bcee4/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.349615 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66cdccd7-317a-47fb-a7e1-06ac1924af9c" path="/var/lib/kubelet/pods/66cdccd7-317a-47fb-a7e1-06ac1924af9c/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.350190 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74cd0e57-f855-4399-b02c-a8740d0e31b7" path="/var/lib/kubelet/pods/74cd0e57-f855-4399-b02c-a8740d0e31b7/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.351305 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aee8230-fc0c-4f50-a805-23331b345013" path="/var/lib/kubelet/pods/7aee8230-fc0c-4f50-a805-23331b345013/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.352278 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" path="/var/lib/kubelet/pods/8cff4827-368d-4e19-beb0-b22b71032f26/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.352934 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac655aad-d9b7-47d0-b4b6-5f8904f5b925" path="/var/lib/kubelet/pods/ac655aad-d9b7-47d0-b4b6-5f8904f5b925/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.354005 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2b78a6-88b2-4a68-86e4-a5e07ac24456" path="/var/lib/kubelet/pods/bc2b78a6-88b2-4a68-86e4-a5e07ac24456/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.354583 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f66c57-362a-437d-a4b4-e2c3bf045890" path="/var/lib/kubelet/pods/c5f66c57-362a-437d-a4b4-e2c3bf045890/volumes" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.355145 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8952181-2de9-4a32-8c71-49e044d03333" path="/var/lib/kubelet/pods/c8952181-2de9-4a32-8c71-49e044d03333/volumes" 
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.356186 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d578f2c7-2fee-4032-b63e-0dc8e5d1371f" path="/var/lib/kubelet/pods/d578f2c7-2fee-4032-b63e-0dc8e5d1371f/volumes"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.356817 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e88a1b-3b19-46b8-b880-7b342640a8f2" path="/var/lib/kubelet/pods/e7e88a1b-3b19-46b8-b880-7b342640a8f2/volumes"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.357466 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf7fcae-8493-4333-96c4-d4692a144187" path="/var/lib/kubelet/pods/ecf7fcae-8493-4333-96c4-d4692a144187/volumes"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.369718 4889 scope.go:117] "RemoveContainer" containerID="5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372336 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5tt7\" (UniqueName: \"kubernetes.io/projected/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-kube-api-access-p5tt7\") pod \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372379 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-public-tls-certs\") pod \"660e4f27-4ee4-43d9-b155-7132c78e9a21\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372428 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-combined-ca-bundle\") pod \"c56d3b5d-d634-47f9-b252-1437066f06e8\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372448 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-config-data\") pod \"660e4f27-4ee4-43d9-b155-7132c78e9a21\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372479 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-579d4\" (UniqueName: \"kubernetes.io/projected/c56d3b5d-d634-47f9-b252-1437066f06e8-kube-api-access-579d4\") pod \"c56d3b5d-d634-47f9-b252-1437066f06e8\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372521 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49j9h\" (UniqueName: \"kubernetes.io/projected/660e4f27-4ee4-43d9-b155-7132c78e9a21-kube-api-access-49j9h\") pod \"660e4f27-4ee4-43d9-b155-7132c78e9a21\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372548 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-public-tls-certs\") pod \"30ed215c-b8d0-43fb-85bd-8531e5acf609\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372568 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-internal-tls-certs\") pod \"660e4f27-4ee4-43d9-b155-7132c78e9a21\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372644 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-config-data\") pod \"30ed215c-b8d0-43fb-85bd-8531e5acf609\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372674 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwpsz\" (UniqueName: \"kubernetes.io/projected/c41bad87-7181-45c9-ad09-bf49b278416d-kube-api-access-zwpsz\") pod \"c41bad87-7181-45c9-ad09-bf49b278416d\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372731 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"30ed215c-b8d0-43fb-85bd-8531e5acf609\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372763 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-config-data\") pod \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372795 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-combined-ca-bundle\") pod \"c41bad87-7181-45c9-ad09-bf49b278416d\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372828 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data-custom\") pod \"c41bad87-7181-45c9-ad09-bf49b278416d\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372854 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-combined-ca-bundle\") pod \"30ed215c-b8d0-43fb-85bd-8531e5acf609\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372890 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-httpd-run\") pod \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372912 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41bad87-7181-45c9-ad09-bf49b278416d-logs\") pod \"c41bad87-7181-45c9-ad09-bf49b278416d\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.372991 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-combined-ca-bundle\") pod \"660e4f27-4ee4-43d9-b155-7132c78e9a21\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373016 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnlcd\" (UniqueName: \"kubernetes.io/projected/30ed215c-b8d0-43fb-85bd-8531e5acf609-kube-api-access-cnlcd\") pod \"30ed215c-b8d0-43fb-85bd-8531e5acf609\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373033 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-combined-ca-bundle\") pod \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373056 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-internal-tls-certs\") pod \"c41bad87-7181-45c9-ad09-bf49b278416d\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373074 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-internal-tls-certs\") pod \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373109 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-scripts\") pod \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373131 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-logs\") pod \"30ed215c-b8d0-43fb-85bd-8531e5acf609\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373154 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56d3b5d-d634-47f9-b252-1437066f06e8-logs\") pod \"c56d3b5d-d634-47f9-b252-1437066f06e8\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373176 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373194 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-nova-metadata-tls-certs\") pod \"c56d3b5d-d634-47f9-b252-1437066f06e8\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373214 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-config-data\") pod \"c56d3b5d-d634-47f9-b252-1437066f06e8\" (UID: \"c56d3b5d-d634-47f9-b252-1437066f06e8\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373234 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-httpd-run\") pod \"30ed215c-b8d0-43fb-85bd-8531e5acf609\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373255 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-scripts\") pod \"30ed215c-b8d0-43fb-85bd-8531e5acf609\" (UID: \"30ed215c-b8d0-43fb-85bd-8531e5acf609\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373278 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-logs\") pod \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\" (UID: \"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373292 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-public-tls-certs\") pod \"c41bad87-7181-45c9-ad09-bf49b278416d\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373315 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data\") pod \"c41bad87-7181-45c9-ad09-bf49b278416d\" (UID: \"c41bad87-7181-45c9-ad09-bf49b278416d\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.373351 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660e4f27-4ee4-43d9-b155-7132c78e9a21-logs\") pod \"660e4f27-4ee4-43d9-b155-7132c78e9a21\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.388479 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56d3b5d-d634-47f9-b252-1437066f06e8-kube-api-access-579d4" (OuterVolumeSpecName: "kube-api-access-579d4") pod "c56d3b5d-d634-47f9-b252-1437066f06e8" (UID: "c56d3b5d-d634-47f9-b252-1437066f06e8"). InnerVolumeSpecName "kube-api-access-579d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.408075 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-logs" (OuterVolumeSpecName: "logs") pod "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" (UID: "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.411915 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660e4f27-4ee4-43d9-b155-7132c78e9a21-logs" (OuterVolumeSpecName: "logs") pod "660e4f27-4ee4-43d9-b155-7132c78e9a21" (UID: "660e4f27-4ee4-43d9-b155-7132c78e9a21"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.419200 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" (UID: "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.419626 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41bad87-7181-45c9-ad09-bf49b278416d-logs" (OuterVolumeSpecName: "logs") pod "c41bad87-7181-45c9-ad09-bf49b278416d" (UID: "c41bad87-7181-45c9-ad09-bf49b278416d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.422079 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-logs" (OuterVolumeSpecName: "logs") pod "30ed215c-b8d0-43fb-85bd-8531e5acf609" (UID: "30ed215c-b8d0-43fb-85bd-8531e5acf609"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.422369 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "30ed215c-b8d0-43fb-85bd-8531e5acf609" (UID: "30ed215c-b8d0-43fb-85bd-8531e5acf609"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.422934 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c56d3b5d-d634-47f9-b252-1437066f06e8-logs" (OuterVolumeSpecName: "logs") pod "c56d3b5d-d634-47f9-b252-1437066f06e8" (UID: "c56d3b5d-d634-47f9-b252-1437066f06e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.423589 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.423619 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.423644 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.423674 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.430244 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-kube-api-access-p5tt7" (OuterVolumeSpecName: "kube-api-access-p5tt7") pod "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" (UID: "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae"). InnerVolumeSpecName "kube-api-access-p5tt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.430542 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.441825 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c41bad87-7181-45c9-ad09-bf49b278416d" (UID: "c41bad87-7181-45c9-ad09-bf49b278416d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.456751 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.477605 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder6a1b-account-delete-bdn66"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.477990 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" (UID: "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478170 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ed215c-b8d0-43fb-85bd-8531e5acf609-kube-api-access-cnlcd" (OuterVolumeSpecName: "kube-api-access-cnlcd") pod "30ed215c-b8d0-43fb-85bd-8531e5acf609" (UID: "30ed215c-b8d0-43fb-85bd-8531e5acf609"). InnerVolumeSpecName "kube-api-access-cnlcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478497 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-scripts" (OuterVolumeSpecName: "scripts") pod "30ed215c-b8d0-43fb-85bd-8531e5acf609" (UID: "30ed215c-b8d0-43fb-85bd-8531e5acf609"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478580 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478597 4889 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478606 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41bad87-7181-45c9-ad09-bf49b278416d-logs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478615 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnlcd\" (UniqueName: \"kubernetes.io/projected/30ed215c-b8d0-43fb-85bd-8531e5acf609-kube-api-access-cnlcd\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478627 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-logs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478636 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56d3b5d-d634-47f9-b252-1437066f06e8-logs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478661 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478669 4889 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ed215c-b8d0-43fb-85bd-8531e5acf609-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478678 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-logs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478687 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660e4f27-4ee4-43d9-b155-7132c78e9a21-logs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478695 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5tt7\" (UniqueName: \"kubernetes.io/projected/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-kube-api-access-p5tt7\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478719 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-579d4\" (UniqueName: \"kubernetes.io/projected/c56d3b5d-d634-47f9-b252-1437066f06e8-kube-api-access-579d4\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478610 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41bad87-7181-45c9-ad09-bf49b278416d-kube-api-access-zwpsz" (OuterVolumeSpecName: "kube-api-access-zwpsz") pod "c41bad87-7181-45c9-ad09-bf49b278416d" (UID: "c41bad87-7181-45c9-ad09-bf49b278416d"). InnerVolumeSpecName "kube-api-access-zwpsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478667 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660e4f27-4ee4-43d9-b155-7132c78e9a21-kube-api-access-49j9h" (OuterVolumeSpecName: "kube-api-access-49j9h") pod "660e4f27-4ee4-43d9-b155-7132c78e9a21" (UID: "660e4f27-4ee4-43d9-b155-7132c78e9a21"). InnerVolumeSpecName "kube-api-access-49j9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.478289 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "30ed215c-b8d0-43fb-85bd-8531e5acf609" (UID: "30ed215c-b8d0-43fb-85bd-8531e5acf609"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.481318 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-scripts" (OuterVolumeSpecName: "scripts") pod "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" (UID: "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.481624 4889 scope.go:117] "RemoveContainer" containerID="0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04"
Nov 28 07:11:41 crc kubenswrapper[4889]: E1128 07:11:41.488308 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04\": container with ID starting with 0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04 not found: ID does not exist" containerID="0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.488365 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04"} err="failed to get container status \"0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04\": rpc error: code = NotFound desc = could not find container \"0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04\": container with ID starting with 0d5e69ce4c3c56502bc09cd56e4b011ed243418f7413b8aaff108e1c40b0dc04 not found: ID does not exist"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.488401 4889 scope.go:117] "RemoveContainer" containerID="5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d"
Nov 28 07:11:41 crc kubenswrapper[4889]: E1128 07:11:41.489423 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d\": container with ID starting with 5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d not found: ID does not exist" containerID="5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.489458 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d"} err="failed to get container status \"5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d\": rpc error: code = NotFound desc = could not find container \"5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d\": container with ID starting with 5ddcb76b6abbb99057ce9920ce93166380becf95f21b27ab6a90f8179f6e647d not found: ID does not exist"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.489479 4889 scope.go:117] "RemoveContainer" containerID="cfce3bc5d6f0828a73170fd49a5f64b6f79394b204fb2e4a2576389017af7153"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.506199 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-config-data" (OuterVolumeSpecName: "config-data") pod "660e4f27-4ee4-43d9-b155-7132c78e9a21" (UID: "660e4f27-4ee4-43d9-b155-7132c78e9a21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.511382 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder6a1b-account-delete-bdn66"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.525872 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.525947 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5bbc5ddd4-vzclt"]
Nov 28 07:11:41 crc kubenswrapper[4889]: E1128 07:11:41.527873 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743 is running failed: container process not found" containerID="288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 28 07:11:41 crc kubenswrapper[4889]: E1128 07:11:41.528605 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743 is running failed: container process not found" containerID="288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 28 07:11:41 crc kubenswrapper[4889]: E1128 07:11:41.533170 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743 is running failed: container process not found" containerID="288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 28 07:11:41 crc kubenswrapper[4889]: E1128 07:11:41.533228 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="22942b26-7d2f-4a77-9d97-b7bd457dcfe7" containerName="nova-scheduler-scheduler"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.535014 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5bbc5ddd4-vzclt"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.548859 4889 scope.go:117] "RemoveContainer" containerID="916841af475c0d0409c239e605ccdb71c123e2852a495b97c814602f89fea785"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.553188 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement75e4-account-delete-x6dpp"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.566859 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c56d3b5d-d634-47f9-b252-1437066f06e8" (UID: "c56d3b5d-d634-47f9-b252-1437066f06e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.572104 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement75e4-account-delete-x6dpp"]
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.589088 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.589118 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.589128 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.589136 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.589146 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.589154 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49j9h\" (UniqueName: \"kubernetes.io/projected/660e4f27-4ee4-43d9-b155-7132c78e9a21-kube-api-access-49j9h\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.589163 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwpsz\" (UniqueName: \"kubernetes.io/projected/c41bad87-7181-45c9-ad09-bf49b278416d-kube-api-access-zwpsz\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.589183 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.612822 4889 scope.go:117] "RemoveContainer" containerID="ff5c205f4bf58cd1d0ad31c563376d2141a5b307862a95d33a353065a03c5642"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.680299 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-config-data" (OuterVolumeSpecName: "config-data") pod "c56d3b5d-d634-47f9-b252-1437066f06e8" (UID: "c56d3b5d-d634-47f9-b252-1437066f06e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.689995 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "660e4f27-4ee4-43d9-b155-7132c78e9a21" (UID: "660e4f27-4ee4-43d9-b155-7132c78e9a21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.690384 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-combined-ca-bundle\") pod \"660e4f27-4ee4-43d9-b155-7132c78e9a21\" (UID: \"660e4f27-4ee4-43d9-b155-7132c78e9a21\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: W1128 07:11:41.690542 4889 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/660e4f27-4ee4-43d9-b155-7132c78e9a21/volumes/kubernetes.io~secret/combined-ca-bundle
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.690573 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "660e4f27-4ee4-43d9-b155-7132c78e9a21" (UID: "660e4f27-4ee4-43d9-b155-7132c78e9a21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.690762 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.690779 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.722298 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glancef2a1-account-delete-wwplw"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.723723 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.724524 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.758203 4889 generic.go:334] "Generic (PLEG): container finished" podID="bf4ff6f2-105e-4f62-be58-3054d0a54fed" containerID="cb72c62f8cc63262a8e708afde0ce2707f137cbcd34fac8af65e4f38de5d9324" exitCode=0
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.758620 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bf4ff6f2-105e-4f62-be58-3054d0a54fed","Type":"ContainerDied","Data":"cb72c62f8cc63262a8e708afde0ce2707f137cbcd34fac8af65e4f38de5d9324"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.758769 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c41bad87-7181-45c9-ad09-bf49b278416d" (UID: "c41bad87-7181-45c9-ad09-bf49b278416d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.768528 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-config-data" (OuterVolumeSpecName: "config-data") pod "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" (UID: "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.769183 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660e4f27-4ee4-43d9-b155-7132c78e9a21","Type":"ContainerDied","Data":"fbe58493cac7473b311c7cdb030d60f9f192f868912c8e5fe7f69056cf48079c"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.769238 4889 scope.go:117] "RemoveContainer" containerID="acb5766d5a413069d801db205a476d18781a4f594a9ce2359a0ea46664f3fc6f"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.769354 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.773196 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi670d-account-delete-q5q9k" event={"ID":"00c7d31d-27e7-45cc-abb6-bae21de9135f","Type":"ContainerDied","Data":"698e96c7a3d29e1670c0cf8b2281b42f1c0d44611909a931b635d677935e5a02"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.773219 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="698e96c7a3d29e1670c0cf8b2281b42f1c0d44611909a931b635d677935e5a02"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.775046 4889 generic.go:334] "Generic (PLEG): container finished" podID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerID="bc963ae674cb642cf73feedb96f166caf22e14565033105ea01efe00c81d6de0" exitCode=0
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.775085 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" event={"ID":"d29dfd27-459d-4ade-8119-3c84095d0b1b","Type":"ContainerDied","Data":"bc963ae674cb642cf73feedb96f166caf22e14565033105ea01efe00c81d6de0"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.775100 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" event={"ID":"d29dfd27-459d-4ade-8119-3c84095d0b1b","Type":"ContainerDied","Data":"1eb848546f13708da29a1b1e0adc2d1fb9e0b24f303d6da0cc02984c629faa86"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.775110 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb848546f13708da29a1b1e0adc2d1fb9e0b24f303d6da0cc02984c629faa86"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.776407 4889 generic.go:334] "Generic (PLEG): container finished" podID="741842f5-b565-43c8-bd99-eb15782fcf18" containerID="ebd8b75f47303d72ac1c1453cd80c63707ba3cde640979a9031b535215433325" exitCode=0
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.776446 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dcb6998f-sb4k2" event={"ID":"741842f5-b565-43c8-bd99-eb15782fcf18","Type":"ContainerDied","Data":"ebd8b75f47303d72ac1c1453cd80c63707ba3cde640979a9031b535215433325"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.777273 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron08e6-account-delete-rzzxh" event={"ID":"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d","Type":"ContainerDied","Data":"023b56bbd7654a9dd84d169850cac13869a43fb115ce30e6f66f88f81571cb5d"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.777293 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023b56bbd7654a9dd84d169850cac13869a43fb115ce30e6f66f88f81571cb5d"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.779411 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican9d4e-account-delete-w2cq4" event={"ID":"5b9c3bd5-587a-40cb-b489-764fd5f98ca0","Type":"ContainerDied","Data":"0cd213f74142ef4a1218558a759dd8f4bb5aceda309f20fbc7fcb0ff54186285"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.779433 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd213f74142ef4a1218558a759dd8f4bb5aceda309f20fbc7fcb0ff54186285"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.785021 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "30ed215c-b8d0-43fb-85bd-8531e5acf609" (UID: "30ed215c-b8d0-43fb-85bd-8531e5acf609"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.792393 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpvz7\" (UniqueName: \"kubernetes.io/projected/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-kube-api-access-bpvz7\") pod \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.792476 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt9cj\" (UniqueName: \"kubernetes.io/projected/32d7e485-1911-4206-bf42-9a57a855a880-kube-api-access-lt9cj\") pod \"32d7e485-1911-4206-bf42-9a57a855a880\" (UID: \"32d7e485-1911-4206-bf42-9a57a855a880\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.792537 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-config-data\") pod \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.792894 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-combined-ca-bundle\") pod \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\" (UID: \"0ca42308-451d-48e1-a74f-2c7ce6c6a53a\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.792967 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d7e485-1911-4206-bf42-9a57a855a880-operator-scripts\") pod \"32d7e485-1911-4206-bf42-9a57a855a880\" (UID: \"32d7e485-1911-4206-bf42-9a57a855a880\") "
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.794196 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.794218 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.794229 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.794240 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.795095 4889 generic.go:334] "Generic (PLEG): container finished" podID="22942b26-7d2f-4a77-9d97-b7bd457dcfe7" containerID="288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743" exitCode=0
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.795809 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22942b26-7d2f-4a77-9d97-b7bd457dcfe7","Type":"ContainerDied","Data":"288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.795853 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22942b26-7d2f-4a77-9d97-b7bd457dcfe7","Type":"ContainerDied","Data":"a44fc71b7cbae1df53405bd997c2a412e2c915a79d9f715611c9ef9c5556ff53"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.795866 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a44fc71b7cbae1df53405bd997c2a412e2c915a79d9f715611c9ef9c5556ff53"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.795929 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d7e485-1911-4206-bf42-9a57a855a880-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32d7e485-1911-4206-bf42-9a57a855a880" (UID: "32d7e485-1911-4206-bf42-9a57a855a880"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.797885 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-kube-api-access-bpvz7" (OuterVolumeSpecName: "kube-api-access-bpvz7") pod "0ca42308-451d-48e1-a74f-2c7ce6c6a53a" (UID: "0ca42308-451d-48e1-a74f-2c7ce6c6a53a"). InnerVolumeSpecName "kube-api-access-bpvz7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.799877 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae","Type":"ContainerDied","Data":"3b49c28367a3d160df8ad542ed4d61e0381a18416a147f8382550cc5b290a67e"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.799982 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.809087 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56d3b5d-d634-47f9-b252-1437066f06e8","Type":"ContainerDied","Data":"70a438098fb583b36a3abbf2efb8b6bae09e688802ef7192c39ddcd0168358cb"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.809145 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.809719 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d7e485-1911-4206-bf42-9a57a855a880-kube-api-access-lt9cj" (OuterVolumeSpecName: "kube-api-access-lt9cj") pod "32d7e485-1911-4206-bf42-9a57a855a880" (UID: "32d7e485-1911-4206-bf42-9a57a855a880"). InnerVolumeSpecName "kube-api-access-lt9cj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.810579 4889 generic.go:334] "Generic (PLEG): container finished" podID="0ca42308-451d-48e1-a74f-2c7ce6c6a53a" containerID="0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10" exitCode=0
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.810649 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0ca42308-451d-48e1-a74f-2c7ce6c6a53a","Type":"ContainerDied","Data":"0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.810677 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0ca42308-451d-48e1-a74f-2c7ce6c6a53a","Type":"ContainerDied","Data":"ceb2590fecbc9729a9dc8fe3d73ce534db21c586e40793159e8f616d4103580d"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.812198 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.818253 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fd84fdbd8-ztpds" event={"ID":"c41bad87-7181-45c9-ad09-bf49b278416d","Type":"ContainerDied","Data":"9fdc68e6e823526c2abd5a125cf23988589aa37b2d2343288d601ff0dae6381f"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.818377 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fd84fdbd8-ztpds"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.821504 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c41bad87-7181-45c9-ad09-bf49b278416d" (UID: "c41bad87-7181-45c9-ad09-bf49b278416d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.822410 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.822584 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ed215c-b8d0-43fb-85bd-8531e5acf609","Type":"ContainerDied","Data":"f034b372d68ec7a99b3a4374bb66584680c94e8d9453e90d16b2255971e39203"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.832637 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "660e4f27-4ee4-43d9-b155-7132c78e9a21" (UID: "660e4f27-4ee4-43d9-b155-7132c78e9a21"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.832650 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30ed215c-b8d0-43fb-85bd-8531e5acf609" (UID: "30ed215c-b8d0-43fb-85bd-8531e5acf609"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.839655 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancef2a1-account-delete-wwplw" event={"ID":"32d7e485-1911-4206-bf42-9a57a855a880","Type":"ContainerDied","Data":"13aa4d1181851fe06902bc52a954dd70b2a6c609df51ae6f31dd02eff6f0ff68"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.839717 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13aa4d1181851fe06902bc52a954dd70b2a6c609df51ae6f31dd02eff6f0ff68"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.839794 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glancef2a1-account-delete-wwplw"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.854258 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell03d10-account-delete-vhnfs" event={"ID":"fe87e12e-e732-4a38-b9bc-0e6000da9bd8","Type":"ContainerDied","Data":"0166b6ea79f4750dee5e9324971f74bd91d64abe3a07dbd6c925e551b04a9c45"}
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.854309 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0166b6ea79f4750dee5e9324971f74bd91d64abe3a07dbd6c925e551b04a9c45"
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.861749 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ca42308-451d-48e1-a74f-2c7ce6c6a53a" (UID: "0ca42308-451d-48e1-a74f-2c7ce6c6a53a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.875544 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" (UID: "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.876938 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data" (OuterVolumeSpecName: "config-data") pod "c41bad87-7181-45c9-ad09-bf49b278416d" (UID: "c41bad87-7181-45c9-ad09-bf49b278416d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.879452 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "660e4f27-4ee4-43d9-b155-7132c78e9a21" (UID: "660e4f27-4ee4-43d9-b155-7132c78e9a21"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.882131 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c41bad87-7181-45c9-ad09-bf49b278416d" (UID: "c41bad87-7181-45c9-ad09-bf49b278416d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.887642 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" (UID: "bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895698 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895741 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895754 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895763 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895773 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d7e485-1911-4206-bf42-9a57a855a880-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895781 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895789 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895797 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895806 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpvz7\" (UniqueName: \"kubernetes.io/projected/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-kube-api-access-bpvz7\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895815 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41bad87-7181-45c9-ad09-bf49b278416d-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895823 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660e4f27-4ee4-43d9-b155-7132c78e9a21-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.895831
4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt9cj\" (UniqueName: \"kubernetes.io/projected/32d7e485-1911-4206-bf42-9a57a855a880-kube-api-access-lt9cj\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.903105 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c56d3b5d-d634-47f9-b252-1437066f06e8" (UID: "c56d3b5d-d634-47f9-b252-1437066f06e8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.905817 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-config-data" (OuterVolumeSpecName: "config-data") pod "30ed215c-b8d0-43fb-85bd-8531e5acf609" (UID: "30ed215c-b8d0-43fb-85bd-8531e5acf609"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.977956 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-config-data" (OuterVolumeSpecName: "config-data") pod "0ca42308-451d-48e1-a74f-2c7ce6c6a53a" (UID: "0ca42308-451d-48e1-a74f-2c7ce6c6a53a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.997349 4889 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d3b5d-d634-47f9-b252-1437066f06e8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.997387 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca42308-451d-48e1-a74f-2c7ce6c6a53a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:41 crc kubenswrapper[4889]: I1128 07:11:41.997398 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ed215c-b8d0-43fb-85bd-8531e5acf609-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.022091 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.041322 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.042195 4889 scope.go:117] "RemoveContainer" containerID="aeb659f950bddd00ce66f14e3cdebf2bdbc3d4975bbd35d09c2685e724f6146c" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.057139 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glancef2a1-account-delete-wwplw"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.058119 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.075506 4889 scope.go:117] "RemoveContainer" containerID="cff416d0a45fbb92ec6800489afd9ccbad8dbac624f5bfcda44035e9258fc559" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.076952 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glancef2a1-account-delete-wwplw"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.095196 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.098651 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b4p6\" (UniqueName: \"kubernetes.io/projected/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-kube-api-access-8b4p6\") pod \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\" (UID: \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.098818 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-operator-scripts\") pod \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\" (UID: \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.100260 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-combined-ca-bundle\") pod \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.100536 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnswm\" (UniqueName: \"kubernetes.io/projected/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-kube-api-access-fnswm\") pod \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\" (UID: \"5b9c3bd5-587a-40cb-b489-764fd5f98ca0\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.100558 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4kx8\" (UniqueName: \"kubernetes.io/projected/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-kube-api-access-g4kx8\") pod \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.100622 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-operator-scripts\") pod \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\" (UID: \"4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.100661 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-config-data\") pod \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\" (UID: \"22942b26-7d2f-4a77-9d97-b7bd457dcfe7\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.102119 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b9c3bd5-587a-40cb-b489-764fd5f98ca0" (UID: "5b9c3bd5-587a-40cb-b489-764fd5f98ca0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.103574 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d" (UID: "4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.106037 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-kube-api-access-fnswm" (OuterVolumeSpecName: "kube-api-access-fnswm") pod "5b9c3bd5-587a-40cb-b489-764fd5f98ca0" (UID: "5b9c3bd5-587a-40cb-b489-764fd5f98ca0"). InnerVolumeSpecName "kube-api-access-fnswm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.107971 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-kube-api-access-g4kx8" (OuterVolumeSpecName: "kube-api-access-g4kx8") pod "22942b26-7d2f-4a77-9d97-b7bd457dcfe7" (UID: "22942b26-7d2f-4a77-9d97-b7bd457dcfe7"). InnerVolumeSpecName "kube-api-access-g4kx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.128861 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22942b26-7d2f-4a77-9d97-b7bd457dcfe7" (UID: "22942b26-7d2f-4a77-9d97-b7bd457dcfe7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.140096 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-config-data" (OuterVolumeSpecName: "config-data") pod "22942b26-7d2f-4a77-9d97-b7bd457dcfe7" (UID: "22942b26-7d2f-4a77-9d97-b7bd457dcfe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.159157 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-kube-api-access-8b4p6" (OuterVolumeSpecName: "kube-api-access-8b4p6") pod "4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d" (UID: "4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d"). InnerVolumeSpecName "kube-api-access-8b4p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.177778 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.179027 4889 scope.go:117] "RemoveContainer" containerID="22318eb16b34523322d3a94ac17704c1b438f84bf7f28f3ecaa09dfd78e54966" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.192248 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202095 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data\") pod \"d29dfd27-459d-4ade-8119-3c84095d0b1b\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202188 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data-custom\") pod \"d29dfd27-459d-4ade-8119-3c84095d0b1b\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202217 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-combined-ca-bundle\") pod \"d29dfd27-459d-4ade-8119-3c84095d0b1b\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202313 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-operator-scripts\") pod \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\" (UID: \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202334 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6qhh\" (UniqueName: \"kubernetes.io/projected/d29dfd27-459d-4ade-8119-3c84095d0b1b-kube-api-access-b6qhh\") pod \"d29dfd27-459d-4ade-8119-3c84095d0b1b\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202393 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w76kg\" (UniqueName: \"kubernetes.io/projected/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-kube-api-access-w76kg\") pod \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\" (UID: \"fe87e12e-e732-4a38-b9bc-0e6000da9bd8\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202418 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d29dfd27-459d-4ade-8119-3c84095d0b1b-logs\") pod \"d29dfd27-459d-4ade-8119-3c84095d0b1b\" (UID: \"d29dfd27-459d-4ade-8119-3c84095d0b1b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202971 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202987 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.202995 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b4p6\" (UniqueName: \"kubernetes.io/projected/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d-kube-api-access-8b4p6\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.203007 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.203015 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.203023 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnswm\" (UniqueName: \"kubernetes.io/projected/5b9c3bd5-587a-40cb-b489-764fd5f98ca0-kube-api-access-fnswm\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.203032 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4kx8\" (UniqueName: \"kubernetes.io/projected/22942b26-7d2f-4a77-9d97-b7bd457dcfe7-kube-api-access-g4kx8\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.203310 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d29dfd27-459d-4ade-8119-3c84095d0b1b-logs" (OuterVolumeSpecName: "logs") pod "d29dfd27-459d-4ade-8119-3c84095d0b1b" (UID: "d29dfd27-459d-4ade-8119-3c84095d0b1b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.203832 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe87e12e-e732-4a38-b9bc-0e6000da9bd8" (UID: "fe87e12e-e732-4a38-b9bc-0e6000da9bd8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.221474 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d29dfd27-459d-4ade-8119-3c84095d0b1b" (UID: "d29dfd27-459d-4ade-8119-3c84095d0b1b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.221499 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-kube-api-access-w76kg" (OuterVolumeSpecName: "kube-api-access-w76kg") pod "fe87e12e-e732-4a38-b9bc-0e6000da9bd8" (UID: "fe87e12e-e732-4a38-b9bc-0e6000da9bd8"). InnerVolumeSpecName "kube-api-access-w76kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.221635 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d29dfd27-459d-4ade-8119-3c84095d0b1b-kube-api-access-b6qhh" (OuterVolumeSpecName: "kube-api-access-b6qhh") pod "d29dfd27-459d-4ade-8119-3c84095d0b1b" (UID: "d29dfd27-459d-4ade-8119-3c84095d0b1b"). InnerVolumeSpecName "kube-api-access-b6qhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.223394 4889 scope.go:117] "RemoveContainer" containerID="567f961e244cb59c92bd5c9c282ae20876453ed39721849a5bc4edf9bc1b69a8" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.229540 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.232503 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.239506 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.246374 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d29dfd27-459d-4ade-8119-3c84095d0b1b" (UID: "d29dfd27-459d-4ade-8119-3c84095d0b1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.247796 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fd84fdbd8-ztpds"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.257326 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.264478 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7fd84fdbd8-ztpds"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.264788 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.272195 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.275785 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.284239 4889 scope.go:117] "RemoveContainer" containerID="e42c6a2fac386f68867d9c6f7a7a339fe2bac4979ffa2c5787f9e179f30a3979" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.284519 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.294280 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.301535 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.304593 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8dz7\" (UniqueName: \"kubernetes.io/projected/741842f5-b565-43c8-bd99-eb15782fcf18-kube-api-access-m8dz7\") pod \"741842f5-b565-43c8-bd99-eb15782fcf18\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.304687 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data\") pod \"741842f5-b565-43c8-bd99-eb15782fcf18\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.304768 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-combined-ca-bundle\") pod \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\" (UID: 
\"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.304814 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-operator-scripts\") pod \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.304839 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-galera-tls-certs\") pod \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.304872 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxss8\" (UniqueName: \"kubernetes.io/projected/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kube-api-access-sxss8\") pod \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.304896 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-default\") pod \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.304931 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-generated\") pod \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305520 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-combined-ca-bundle\") pod \"741842f5-b565-43c8-bd99-eb15782fcf18\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305570 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kolla-config\") pod \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305620 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741842f5-b565-43c8-bd99-eb15782fcf18-logs\") pod \"741842f5-b565-43c8-bd99-eb15782fcf18\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305693 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data-custom\") pod \"741842f5-b565-43c8-bd99-eb15782fcf18\" (UID: \"741842f5-b565-43c8-bd99-eb15782fcf18\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305806 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\" (UID: 
\"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305838 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-combined-ca-bundle\") pod \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\" (UID: \"b4be180d-c2ba-47ad-964d-18e7b1c12b2b\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305885 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00c7d31d-27e7-45cc-abb6-bae21de9135f-operator-scripts\") pod \"00c7d31d-27e7-45cc-abb6-bae21de9135f\" (UID: \"00c7d31d-27e7-45cc-abb6-bae21de9135f\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305916 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-config-data\") pod \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305944 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhxsm\" (UniqueName: \"kubernetes.io/projected/bf4ff6f2-105e-4f62-be58-3054d0a54fed-kube-api-access-xhxsm\") pod \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\" (UID: \"bf4ff6f2-105e-4f62-be58-3054d0a54fed\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.305965 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9m99\" (UniqueName: \"kubernetes.io/projected/00c7d31d-27e7-45cc-abb6-bae21de9135f-kube-api-access-h9m99\") pod \"00c7d31d-27e7-45cc-abb6-bae21de9135f\" (UID: \"00c7d31d-27e7-45cc-abb6-bae21de9135f\") " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.306019 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4be180d-c2ba-47ad-964d-18e7b1c12b2b" (UID: "b4be180d-c2ba-47ad-964d-18e7b1c12b2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307009 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "b4be180d-c2ba-47ad-964d-18e7b1c12b2b" (UID: "b4be180d-c2ba-47ad-964d-18e7b1c12b2b"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307082 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c7d31d-27e7-45cc-abb6-bae21de9135f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00c7d31d-27e7-45cc-abb6-bae21de9135f" (UID: "00c7d31d-27e7-45cc-abb6-bae21de9135f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307530 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307557 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6qhh\" (UniqueName: \"kubernetes.io/projected/d29dfd27-459d-4ade-8119-3c84095d0b1b-kube-api-access-b6qhh\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307574 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w76kg\" (UniqueName: \"kubernetes.io/projected/fe87e12e-e732-4a38-b9bc-0e6000da9bd8-kube-api-access-w76kg\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307585 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d29dfd27-459d-4ade-8119-3c84095d0b1b-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307597 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307608 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307619 4889 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00c7d31d-27e7-45cc-abb6-bae21de9135f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307634 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.307646 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: E1128 07:11:42.307721 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:11:42 crc kubenswrapper[4889]: E1128 07:11:42.307772 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data podName:9b744978-786e-4ab0-8a5c-1e8e3f9a2809 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:50.307751852 +0000 UTC m=+1433.277986007 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data") pod "rabbitmq-cell1-server-0" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.311776 4889 scope.go:117] "RemoveContainer" containerID="0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.311892 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b4be180d-c2ba-47ad-964d-18e7b1c12b2b" (UID: "b4be180d-c2ba-47ad-964d-18e7b1c12b2b"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.312236 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741842f5-b565-43c8-bd99-eb15782fcf18-logs" (OuterVolumeSpecName: "logs") pod "741842f5-b565-43c8-bd99-eb15782fcf18" (UID: "741842f5-b565-43c8-bd99-eb15782fcf18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.312513 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "b4be180d-c2ba-47ad-964d-18e7b1c12b2b" (UID: "b4be180d-c2ba-47ad-964d-18e7b1c12b2b"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.312577 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.320076 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "741842f5-b565-43c8-bd99-eb15782fcf18" (UID: "741842f5-b565-43c8-bd99-eb15782fcf18"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.320174 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4ff6f2-105e-4f62-be58-3054d0a54fed-kube-api-access-xhxsm" (OuterVolumeSpecName: "kube-api-access-xhxsm") pod "bf4ff6f2-105e-4f62-be58-3054d0a54fed" (UID: "bf4ff6f2-105e-4f62-be58-3054d0a54fed"). InnerVolumeSpecName "kube-api-access-xhxsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.320586 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c7d31d-27e7-45cc-abb6-bae21de9135f-kube-api-access-h9m99" (OuterVolumeSpecName: "kube-api-access-h9m99") pod "00c7d31d-27e7-45cc-abb6-bae21de9135f" (UID: "00c7d31d-27e7-45cc-abb6-bae21de9135f"). InnerVolumeSpecName "kube-api-access-h9m99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.324118 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741842f5-b565-43c8-bd99-eb15782fcf18-kube-api-access-m8dz7" (OuterVolumeSpecName: "kube-api-access-m8dz7") pod "741842f5-b565-43c8-bd99-eb15782fcf18" (UID: "741842f5-b565-43c8-bd99-eb15782fcf18"). InnerVolumeSpecName "kube-api-access-m8dz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.347091 4889 scope.go:117] "RemoveContainer" containerID="0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.350995 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data" (OuterVolumeSpecName: "config-data") pod "d29dfd27-459d-4ade-8119-3c84095d0b1b" (UID: "d29dfd27-459d-4ade-8119-3c84095d0b1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.351443 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kube-api-access-sxss8" (OuterVolumeSpecName: "kube-api-access-sxss8") pod "b4be180d-c2ba-47ad-964d-18e7b1c12b2b" (UID: "b4be180d-c2ba-47ad-964d-18e7b1c12b2b"). InnerVolumeSpecName "kube-api-access-sxss8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: E1128 07:11:42.360277 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10\": container with ID starting with 0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10 not found: ID does not exist" containerID="0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.360364 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10"} err="failed to get container status \"0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10\": rpc error: code = NotFound desc = could not find container \"0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10\": container with ID starting with 0216856355af8616c5b63fdc37d18e76cfab1d8ef1a0fccb621523a8a32def10 not found: ID does not exist" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.360392 4889 scope.go:117] "RemoveContainer" containerID="e680db750829bfe235068d372b958d1768e839b09f9e0ae52648fe5055964fda" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.362797 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.369206 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.375406 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4be180d-c2ba-47ad-964d-18e7b1c12b2b" (UID: "b4be180d-c2ba-47ad-964d-18e7b1c12b2b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.383762 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "b4be180d-c2ba-47ad-964d-18e7b1c12b2b" (UID: "b4be180d-c2ba-47ad-964d-18e7b1c12b2b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.385610 4889 scope.go:117] "RemoveContainer" containerID="411c51ac4022ce773c6ca107021fdf0aa7e87825c86f41edfb9eef55abeb15ae" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.397558 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf4ff6f2-105e-4f62-be58-3054d0a54fed" (UID: "bf4ff6f2-105e-4f62-be58-3054d0a54fed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.406569 4889 scope.go:117] "RemoveContainer" containerID="49402cf027d11e8e350b29757338f93c7461291da6a3603e125d8fc9821c3652" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.406791 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "b4be180d-c2ba-47ad-964d-18e7b1c12b2b" (UID: "b4be180d-c2ba-47ad-964d-18e7b1c12b2b"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.407986 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-config-data" (OuterVolumeSpecName: "config-data") pod "bf4ff6f2-105e-4f62-be58-3054d0a54fed" (UID: "bf4ff6f2-105e-4f62-be58-3054d0a54fed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.408921 4889 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741842f5-b565-43c8-bd99-eb15782fcf18-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.408950 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d29dfd27-459d-4ade-8119-3c84095d0b1b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.408960 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.408980 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.408989 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.408997 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.409006 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhxsm\" (UniqueName: \"kubernetes.io/projected/bf4ff6f2-105e-4f62-be58-3054d0a54fed-kube-api-access-xhxsm\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.409016 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9m99\" (UniqueName: \"kubernetes.io/projected/00c7d31d-27e7-45cc-abb6-bae21de9135f-kube-api-access-h9m99\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.409024 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8dz7\" (UniqueName: \"kubernetes.io/projected/741842f5-b565-43c8-bd99-eb15782fcf18-kube-api-access-m8dz7\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.409032 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4ff6f2-105e-4f62-be58-3054d0a54fed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.409051 4889 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.409059 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxss8\" (UniqueName: \"kubernetes.io/projected/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kube-api-access-sxss8\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.409068 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-config-data-generated\") on node \"crc\" 
DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.409077 4889 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4be180d-c2ba-47ad-964d-18e7b1c12b2b-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.429560 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.429899 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data" (OuterVolumeSpecName: "config-data") pod "741842f5-b565-43c8-bd99-eb15782fcf18" (UID: "741842f5-b565-43c8-bd99-eb15782fcf18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.433126 4889 scope.go:117] "RemoveContainer" containerID="409c5ef01d2ff33efa004111267e8e87bbe31d48936823d35c2588b49a2b67eb" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.434669 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "741842f5-b565-43c8-bd99-eb15782fcf18" (UID: "741842f5-b565-43c8-bd99-eb15782fcf18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.510480 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.510506 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.510515 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741842f5-b565-43c8-bd99-eb15782fcf18-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:42 crc kubenswrapper[4889]: E1128 07:11:42.513774 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:11:42 crc kubenswrapper[4889]: E1128 07:11:42.521809 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:11:42 crc kubenswrapper[4889]: E1128 07:11:42.523405 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:11:42 crc 
kubenswrapper[4889]: E1128 07:11:42.523438 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="ovn-northd" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.880850 4889 generic.go:334] "Generic (PLEG): container finished" podID="07dfa6e3-4c33-403d-96c6-819c44224466" containerID="6a06f1ca551a6cfc2a03c4624310248aaa2f03752d3fc88f4cfb44ec7049ede3" exitCode=0 Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.880938 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55c8d644db-cqxsn" event={"ID":"07dfa6e3-4c33-403d-96c6-819c44224466","Type":"ContainerDied","Data":"6a06f1ca551a6cfc2a03c4624310248aaa2f03752d3fc88f4cfb44ec7049ede3"} Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.884390 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_972b231d-adb2-4355-ae5b-57fc0cc642f4/ovn-northd/0.log" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.884420 4889 generic.go:334] "Generic (PLEG): container finished" podID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerID="a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" exitCode=139 Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.884475 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"972b231d-adb2-4355-ae5b-57fc0cc642f4","Type":"ContainerDied","Data":"a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833"} Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.896780 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dcb6998f-sb4k2" event={"ID":"741842f5-b565-43c8-bd99-eb15782fcf18","Type":"ContainerDied","Data":"4a49f113408cfaf534228d55a531c628341bd1c8e1ff7b97aaccabb75131ebae"} Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.896844 4889 scope.go:117] "RemoveContainer" containerID="ebd8b75f47303d72ac1c1453cd80c63707ba3cde640979a9031b535215433325" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.896976 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59dcb6998f-sb4k2" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.902277 4889 generic.go:334] "Generic (PLEG): container finished" podID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" containerID="55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f" exitCode=0 Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.902348 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b4be180d-c2ba-47ad-964d-18e7b1c12b2b","Type":"ContainerDied","Data":"55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f"} Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.902365 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.902371 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b4be180d-c2ba-47ad-964d-18e7b1c12b2b","Type":"ContainerDied","Data":"36378c077ac6da636e5fca8eba4ba7a3b05d4dda956e4f47e0a16f7cf47f60c7"} Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.919757 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bf4ff6f2-105e-4f62-be58-3054d0a54fed","Type":"ContainerDied","Data":"821eeeef45fe1a309d05a04978883f02892218727343f733ec9946f42d7cd928"} Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.920040 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.922484 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican9d4e-account-delete-w2cq4" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.922733 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell03d10-account-delete-vhnfs" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.928501 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.930044 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-855dc646d8-klfjs" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.930641 4889 scope.go:117] "RemoveContainer" containerID="8961c6c6cb72aa100a7094f71ba9f1994c37f8a3f3c96b49d31139ba2ab2efea" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.930740 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi670d-account-delete-q5q9k" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.930793 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron08e6-account-delete-rzzxh" Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.963546 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.967189 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.977903 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-59dcb6998f-sb4k2"] Nov 28 07:11:42 crc kubenswrapper[4889]: I1128 07:11:42.983603 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-59dcb6998f-sb4k2"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.026916 4889 scope.go:117] "RemoveContainer" containerID="55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.029470 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_972b231d-adb2-4355-ae5b-57fc0cc642f4/ovn-northd/0.log" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.029523 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.101813 4889 scope.go:117] "RemoveContainer" containerID="01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.105359 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-855dc646d8-klfjs"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.121547 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-config\") pod \"972b231d-adb2-4355-ae5b-57fc0cc642f4\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.121616 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-northd-tls-certs\") pod \"972b231d-adb2-4355-ae5b-57fc0cc642f4\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.121659 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-combined-ca-bundle\") pod \"972b231d-adb2-4355-ae5b-57fc0cc642f4\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.121770 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2826t\" (UniqueName: \"kubernetes.io/projected/972b231d-adb2-4355-ae5b-57fc0cc642f4-kube-api-access-2826t\") pod \"972b231d-adb2-4355-ae5b-57fc0cc642f4\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.121804 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-metrics-certs-tls-certs\") pod \"972b231d-adb2-4355-ae5b-57fc0cc642f4\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.121861 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-scripts\") pod \"972b231d-adb2-4355-ae5b-57fc0cc642f4\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.121927 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-rundir\") pod \"972b231d-adb2-4355-ae5b-57fc0cc642f4\" (UID: \"972b231d-adb2-4355-ae5b-57fc0cc642f4\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.123754 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "972b231d-adb2-4355-ae5b-57fc0cc642f4" (UID: "972b231d-adb2-4355-ae5b-57fc0cc642f4"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.128218 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972b231d-adb2-4355-ae5b-57fc0cc642f4-kube-api-access-2826t" (OuterVolumeSpecName: "kube-api-access-2826t") pod "972b231d-adb2-4355-ae5b-57fc0cc642f4" (UID: "972b231d-adb2-4355-ae5b-57fc0cc642f4"). InnerVolumeSpecName "kube-api-access-2826t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.135368 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-config" (OuterVolumeSpecName: "config") pod "972b231d-adb2-4355-ae5b-57fc0cc642f4" (UID: "972b231d-adb2-4355-ae5b-57fc0cc642f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.136128 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-scripts" (OuterVolumeSpecName: "scripts") pod "972b231d-adb2-4355-ae5b-57fc0cc642f4" (UID: "972b231d-adb2-4355-ae5b-57fc0cc642f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.136179 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-855dc646d8-klfjs"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.142985 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.148791 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.156303 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell03d10-account-delete-vhnfs"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.162435 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell03d10-account-delete-vhnfs"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.167867 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "972b231d-adb2-4355-ae5b-57fc0cc642f4" (UID: "972b231d-adb2-4355-ae5b-57fc0cc642f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.170838 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican9d4e-account-delete-w2cq4"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.176557 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican9d4e-account-delete-w2cq4"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.181425 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.185741 4889 scope.go:117] "RemoveContainer" containerID="55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f" Nov 28 07:11:43 crc kubenswrapper[4889]: E1128 07:11:43.186202 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f\": container with ID starting with 55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f not found: ID does not exist" containerID="55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.186258 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f"} err="failed to get container status \"55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f\": rpc error: code = NotFound desc = could not find container \"55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f\": container with ID starting with 55e237025fc7ff4fd8bbdf9f30e4b4e8bc077d3a319acf121eab2f36ae4ead0f not found: ID does not exist" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.186287 4889 scope.go:117] "RemoveContainer" containerID="01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed" Nov 28 07:11:43 crc kubenswrapper[4889]: E1128 07:11:43.186604 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed\": container with ID starting with 01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed not found: ID does not exist" containerID="01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.186636 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed"} err="failed to get container status \"01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed\": rpc error: code = NotFound desc = could not find container \"01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed\": container with ID starting with 01723b63271c5073740ad7c89a32f35a090e293ea3d47ffead2328d163a294ed not found: ID does not exist" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.186657 4889 scope.go:117] "RemoveContainer" containerID="cb72c62f8cc63262a8e708afde0ce2707f137cbcd34fac8af65e4f38de5d9324" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.187195 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.192199 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron08e6-account-delete-rzzxh"] Nov 28 07:11:43 crc 
kubenswrapper[4889]: I1128 07:11:43.196949 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron08e6-account-delete-rzzxh"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.200436 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.202438 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi670d-account-delete-q5q9k"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.207456 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi670d-account-delete-q5q9k"] Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.223880 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="d578f2c7-2fee-4032-b63e-0dc8e5d1371f" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.192:6080/vnc_lite.html\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.227272 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.227309 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.227324 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972b231d-adb2-4355-ae5b-57fc0cc642f4-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.227337 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.227348 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2826t\" (UniqueName: \"kubernetes.io/projected/972b231d-adb2-4355-ae5b-57fc0cc642f4-kube-api-access-2826t\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: E1128 07:11:43.227425 4889 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:11:43 crc kubenswrapper[4889]: E1128 07:11:43.227477 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data podName:90d501b3-ad2c-4fb8-814d-411dc2a11f20 nodeName:}" failed. No retries permitted until 2025-11-28 07:11:51.227460419 +0000 UTC m=+1434.197694574 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data") pod "rabbitmq-server-0" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20") : configmap "rabbitmq-config-data" not found Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.229853 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "972b231d-adb2-4355-ae5b-57fc0cc642f4" (UID: "972b231d-adb2-4355-ae5b-57fc0cc642f4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.234108 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "972b231d-adb2-4355-ae5b-57fc0cc642f4" (UID: "972b231d-adb2-4355-ae5b-57fc0cc642f4"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.328855 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-public-tls-certs\") pod \"07dfa6e3-4c33-403d-96c6-819c44224466\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.329681 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-fernet-keys\") pod \"07dfa6e3-4c33-403d-96c6-819c44224466\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.329737 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzhhs\" (UniqueName: \"kubernetes.io/projected/07dfa6e3-4c33-403d-96c6-819c44224466-kube-api-access-dzhhs\") pod \"07dfa6e3-4c33-403d-96c6-819c44224466\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.329769 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-config-data\") pod \"07dfa6e3-4c33-403d-96c6-819c44224466\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.329823 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-scripts\") pod \"07dfa6e3-4c33-403d-96c6-819c44224466\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.329857 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-credential-keys\") pod \"07dfa6e3-4c33-403d-96c6-819c44224466\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.330059 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-combined-ca-bundle\") pod 
\"07dfa6e3-4c33-403d-96c6-819c44224466\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.330127 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-internal-tls-certs\") pod \"07dfa6e3-4c33-403d-96c6-819c44224466\" (UID: \"07dfa6e3-4c33-403d-96c6-819c44224466\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.330643 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.330661 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b231d-adb2-4355-ae5b-57fc0cc642f4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.338028 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "07dfa6e3-4c33-403d-96c6-819c44224466" (UID: "07dfa6e3-4c33-403d-96c6-819c44224466"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.338622 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07dfa6e3-4c33-403d-96c6-819c44224466-kube-api-access-dzhhs" (OuterVolumeSpecName: "kube-api-access-dzhhs") pod "07dfa6e3-4c33-403d-96c6-819c44224466" (UID: "07dfa6e3-4c33-403d-96c6-819c44224466"). InnerVolumeSpecName "kube-api-access-dzhhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.340863 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "07dfa6e3-4c33-403d-96c6-819c44224466" (UID: "07dfa6e3-4c33-403d-96c6-819c44224466"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.341128 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-scripts" (OuterVolumeSpecName: "scripts") pod "07dfa6e3-4c33-403d-96c6-819c44224466" (UID: "07dfa6e3-4c33-403d-96c6-819c44224466"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.346535 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c7d31d-27e7-45cc-abb6-bae21de9135f" path="/var/lib/kubelet/pods/00c7d31d-27e7-45cc-abb6-bae21de9135f/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.347094 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" path="/var/lib/kubelet/pods/010c335b-59f4-4016-976b-ac71eaf5d14f/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.347637 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca42308-451d-48e1-a74f-2c7ce6c6a53a" path="/var/lib/kubelet/pods/0ca42308-451d-48e1-a74f-2c7ce6c6a53a/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.348566 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22942b26-7d2f-4a77-9d97-b7bd457dcfe7" path="/var/lib/kubelet/pods/22942b26-7d2f-4a77-9d97-b7bd457dcfe7/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.349169 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" path="/var/lib/kubelet/pods/30ed215c-b8d0-43fb-85bd-8531e5acf609/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.349888 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d7e485-1911-4206-bf42-9a57a855a880" path="/var/lib/kubelet/pods/32d7e485-1911-4206-bf42-9a57a855a880/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.350752 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d" path="/var/lib/kubelet/pods/4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.351250 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5276ecd4-549a-4a41-94be-6408535b2492" path="/var/lib/kubelet/pods/5276ecd4-549a-4a41-94be-6408535b2492/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.351754 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9c3bd5-587a-40cb-b489-764fd5f98ca0" path="/var/lib/kubelet/pods/5b9c3bd5-587a-40cb-b489-764fd5f98ca0/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.352820 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" path="/var/lib/kubelet/pods/660e4f27-4ee4-43d9-b155-7132c78e9a21/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.354114 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741842f5-b565-43c8-bd99-eb15782fcf18" path="/var/lib/kubelet/pods/741842f5-b565-43c8-bd99-eb15782fcf18/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.359130 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da3d6a4-5874-4305-b358-9765720b68f9" path="/var/lib/kubelet/pods/8da3d6a4-5874-4305-b358-9765720b68f9/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.359939 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" path="/var/lib/kubelet/pods/b4be180d-c2ba-47ad-964d-18e7b1c12b2b/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.360939 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" path="/var/lib/kubelet/pods/bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae/volumes" 
Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.362036 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4ff6f2-105e-4f62-be58-3054d0a54fed" path="/var/lib/kubelet/pods/bf4ff6f2-105e-4f62-be58-3054d0a54fed/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.362244 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-config-data" (OuterVolumeSpecName: "config-data") pod "07dfa6e3-4c33-403d-96c6-819c44224466" (UID: "07dfa6e3-4c33-403d-96c6-819c44224466"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.363329 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" path="/var/lib/kubelet/pods/c41bad87-7181-45c9-ad09-bf49b278416d/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.366575 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" path="/var/lib/kubelet/pods/c56d3b5d-d634-47f9-b252-1437066f06e8/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.367181 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" path="/var/lib/kubelet/pods/c7209dbe-be81-47dd-9255-c2444debdaa9/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.367311 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07dfa6e3-4c33-403d-96c6-819c44224466" (UID: "07dfa6e3-4c33-403d-96c6-819c44224466"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.368549 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" path="/var/lib/kubelet/pods/d29dfd27-459d-4ade-8119-3c84095d0b1b/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.369133 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07c52ed-8e06-4dc1-8400-09a9dba35926" path="/var/lib/kubelet/pods/f07c52ed-8e06-4dc1-8400-09a9dba35926/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.369590 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9aacedc-5e53-4c26-8ded-2af578a7de41" path="/var/lib/kubelet/pods/f9aacedc-5e53-4c26-8ded-2af578a7de41/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.370502 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe87e12e-e732-4a38-b9bc-0e6000da9bd8" path="/var/lib/kubelet/pods/fe87e12e-e732-4a38-b9bc-0e6000da9bd8/volumes" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.381993 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07dfa6e3-4c33-403d-96c6-819c44224466" (UID: "07dfa6e3-4c33-403d-96c6-819c44224466"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.412906 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "07dfa6e3-4c33-403d-96c6-819c44224466" (UID: "07dfa6e3-4c33-403d-96c6-819c44224466"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.432596 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.432630 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.432639 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.432647 4889 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.432686 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzhhs\" (UniqueName: \"kubernetes.io/projected/07dfa6e3-4c33-403d-96c6-819c44224466-kube-api-access-dzhhs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.432698 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.432718 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.432728 4889 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07dfa6e3-4c33-403d-96c6-819c44224466-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.482458 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.534840 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-plugins-conf\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.534938 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-confd\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.534974 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-tls\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535024 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-server-conf\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535056 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-erlang-cookie\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535088 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-plugins\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535158 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535181 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-pod-info\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535210 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535245 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-erlang-cookie-secret\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: 
\"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535274 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml62b\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-kube-api-access-ml62b\") pod \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\" (UID: \"9b744978-786e-4ab0-8a5c-1e8e3f9a2809\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535559 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.535911 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.536016 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.539273 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-pod-info" (OuterVolumeSpecName: "pod-info") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.539277 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.540362 4889 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.540391 4889 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.540407 4889 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.540423 4889 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.540441 4889 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.545369 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-kube-api-access-ml62b" (OuterVolumeSpecName: "kube-api-access-ml62b") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "kube-api-access-ml62b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.550532 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.551162 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.561377 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data" (OuterVolumeSpecName: "config-data") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.578922 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-server-conf" (OuterVolumeSpecName: "server-conf") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.623149 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9b744978-786e-4ab0-8a5c-1e8e3f9a2809" (UID: "9b744978-786e-4ab0-8a5c-1e8e3f9a2809"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.642053 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.642113 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.642125 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml62b\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-kube-api-access-ml62b\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.642137 4889 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.642146 4889 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.642154 4889 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b744978-786e-4ab0-8a5c-1e8e3f9a2809-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.657473 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.743227 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.806101 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.845662 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90d501b3-ad2c-4fb8-814d-411dc2a11f20-erlang-cookie-secret\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.845737 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90d501b3-ad2c-4fb8-814d-411dc2a11f20-pod-info\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.845822 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.845929 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-erlang-cookie\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.845954 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdsnr\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-kube-api-access-jdsnr\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.846304 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-plugins-conf\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.846338 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-server-conf\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.846361 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-plugins\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.846400 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-tls\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.846411 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" 
(UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.846425 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-confd\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.846561 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data\") pod \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\" (UID: \"90d501b3-ad2c-4fb8-814d-411dc2a11f20\") " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.847057 4889 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.847523 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.848859 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.852211 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/90d501b3-ad2c-4fb8-814d-411dc2a11f20-pod-info" (OuterVolumeSpecName: "pod-info") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.852757 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-kube-api-access-jdsnr" (OuterVolumeSpecName: "kube-api-access-jdsnr") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "kube-api-access-jdsnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.857089 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.868162 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.870863 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d501b3-ad2c-4fb8-814d-411dc2a11f20-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.880041 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data" (OuterVolumeSpecName: "config-data") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.898810 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-server-conf" (OuterVolumeSpecName: "server-conf") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.937034 4889 generic.go:334] "Generic (PLEG): container finished" podID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" containerID="1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a" exitCode=0 Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.937175 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.937555 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90d501b3-ad2c-4fb8-814d-411dc2a11f20","Type":"ContainerDied","Data":"1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a"} Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.937676 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"90d501b3-ad2c-4fb8-814d-411dc2a11f20","Type":"ContainerDied","Data":"40f605471f0a69da83e1e1c311fb5c96870e596936fe4dd2f45833417c3d801c"} Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.937724 4889 scope.go:117] "RemoveContainer" containerID="1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.946661 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55c8d644db-cqxsn" event={"ID":"07dfa6e3-4c33-403d-96c6-819c44224466","Type":"ContainerDied","Data":"0cf33f95b58d373300c547c33b8507f7b7ea8baddfc644160d6677460298f59e"} Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.946787 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-55c8d644db-cqxsn" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.950068 4889 generic.go:334] "Generic (PLEG): container finished" podID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" containerID="c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761" exitCode=0 Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.950132 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b744978-786e-4ab0-8a5c-1e8e3f9a2809","Type":"ContainerDied","Data":"c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761"} Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.950156 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b744978-786e-4ab0-8a5c-1e8e3f9a2809","Type":"ContainerDied","Data":"a09623258db225ca42b69afb7d249e2b7bcbc3fd02bf396bea4cd9a6c00a7e4c"} Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.950244 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951037 4889 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951138 4889 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951212 4889 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951298 4889 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951431 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90d501b3-ad2c-4fb8-814d-411dc2a11f20-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951529 4889 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90d501b3-ad2c-4fb8-814d-411dc2a11f20-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951653 4889 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90d501b3-ad2c-4fb8-814d-411dc2a11f20-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951756 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.951829 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdsnr\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-kube-api-access-jdsnr\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:43 crc 
kubenswrapper[4889]: I1128 07:11:43.956104 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_972b231d-adb2-4355-ae5b-57fc0cc642f4/ovn-northd/0.log" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.956163 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"972b231d-adb2-4355-ae5b-57fc0cc642f4","Type":"ContainerDied","Data":"b0e4d685247e21423d7f0f05034aa7485b9a0d9a040e053038a63b1640c19c1c"} Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.956268 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.956511 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "90d501b3-ad2c-4fb8-814d-411dc2a11f20" (UID: "90d501b3-ad2c-4fb8-814d-411dc2a11f20"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.976662 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.977964 4889 scope.go:117] "RemoveContainer" containerID="0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f" Nov 28 07:11:43 crc kubenswrapper[4889]: I1128 07:11:43.994825 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.006422 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.015869 4889 scope.go:117] "RemoveContainer" containerID="1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a" Nov 28 07:11:44 crc kubenswrapper[4889]: E1128 07:11:44.016563 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a\": container with ID starting with 1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a not found: ID does not exist" containerID="1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.016607 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a"} err="failed to get container status \"1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a\": rpc error: code = NotFound desc = could not find container \"1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a\": container with ID starting with 1dac380b1e82241d20da7e976e9f06718b11f989ec700267bda164918b83356a not found: ID does not exist" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.016642 4889 scope.go:117] "RemoveContainer" containerID="0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f" Nov 28 07:11:44 crc kubenswrapper[4889]: E1128 07:11:44.017084 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f\": container with ID starting with 
0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f not found: ID does not exist" containerID="0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.017147 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f"} err="failed to get container status \"0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f\": rpc error: code = NotFound desc = could not find container \"0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f\": container with ID starting with 0ed45b48dfd8ca8367bc4ae3ef28332b90ad6e1043dc853af5e7c1db7972918f not found: ID does not exist" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.017180 4889 scope.go:117] "RemoveContainer" containerID="6a06f1ca551a6cfc2a03c4624310248aaa2f03752d3fc88f4cfb44ec7049ede3" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.020847 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-55c8d644db-cqxsn"] Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.030779 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-55c8d644db-cqxsn"] Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.035778 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.040722 4889 scope.go:117] "RemoveContainer" containerID="c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.041677 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.053688 4889 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90d501b3-ad2c-4fb8-814d-411dc2a11f20-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.053818 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.062326 4889 scope.go:117] "RemoveContainer" containerID="278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.095080 4889 scope.go:117] "RemoveContainer" containerID="c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761" Nov 28 07:11:44 crc kubenswrapper[4889]: E1128 07:11:44.095951 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761\": container with ID starting with c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761 not found: ID does not exist" containerID="c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.096001 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761"} err="failed to get container status \"c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761\": rpc error: code = NotFound desc = could not find container 
\"c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761\": container with ID starting with c9ecc397d47aa2f460c2c40f7e62da1e213bf2862e3f86e19a8860708e823761 not found: ID does not exist" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.096038 4889 scope.go:117] "RemoveContainer" containerID="278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60" Nov 28 07:11:44 crc kubenswrapper[4889]: E1128 07:11:44.096493 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60\": container with ID starting with 278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60 not found: ID does not exist" containerID="278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.096531 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60"} err="failed to get container status \"278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60\": rpc error: code = NotFound desc = could not find container \"278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60\": container with ID starting with 278325dfc55d084f94b9860a78601838928f64826b9edba71aca2944aa348a60 not found: ID does not exist" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.096561 4889 scope.go:117] "RemoveContainer" containerID="501a4b31916c81c75b98f9162dc9d571bda2ac1eeda86e0c705c757893b500ab" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.120399 4889 scope.go:117] "RemoveContainer" containerID="a9fac6400facb7b96a3924305e5f4d0e363f1769f5c5bf049520bf77dd4af833" Nov 28 07:11:44 crc kubenswrapper[4889]: E1128 07:11:44.126108 4889 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 28 07:11:44 crc kubenswrapper[4889]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-28T07:11:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 28 07:11:44 crc kubenswrapper[4889]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Nov 28 07:11:44 crc kubenswrapper[4889]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-dlfmr" message=< Nov 28 07:11:44 crc kubenswrapper[4889]: Exiting ovn-controller (1) [FAILED] Nov 28 07:11:44 crc kubenswrapper[4889]: Killing ovn-controller (1) [ OK ] Nov 28 07:11:44 crc kubenswrapper[4889]: Killing ovn-controller (1) with SIGKILL [ OK ] Nov 28 07:11:44 crc kubenswrapper[4889]: 2025-11-28T07:11:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 28 07:11:44 crc kubenswrapper[4889]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Nov 28 07:11:44 crc kubenswrapper[4889]: > Nov 28 07:11:44 crc kubenswrapper[4889]: E1128 07:11:44.126155 4889 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 28 07:11:44 crc kubenswrapper[4889]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-28T07:11:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 28 07:11:44 crc kubenswrapper[4889]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Nov 28 07:11:44 crc kubenswrapper[4889]: > pod="openstack/ovn-controller-dlfmr" podUID="723ca26e-f925-47cc-92e3-998ff36f3e92" containerName="ovn-controller" 
containerID="cri-o://bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.126226 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-dlfmr" podUID="723ca26e-f925-47cc-92e3-998ff36f3e92" containerName="ovn-controller" containerID="cri-o://bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264" gracePeriod=22 Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.300011 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.311646 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.456237 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dlfmr_723ca26e-f925-47cc-92e3-998ff36f3e92/ovn-controller/0.log" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.456329 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dlfmr" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.462421 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-ovn-controller-tls-certs\") pod \"723ca26e-f925-47cc-92e3-998ff36f3e92\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.462471 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-combined-ca-bundle\") pod \"723ca26e-f925-47cc-92e3-998ff36f3e92\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.462514 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run\") pod \"723ca26e-f925-47cc-92e3-998ff36f3e92\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.462538 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z2x7\" (UniqueName: \"kubernetes.io/projected/723ca26e-f925-47cc-92e3-998ff36f3e92-kube-api-access-8z2x7\") pod \"723ca26e-f925-47cc-92e3-998ff36f3e92\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.462553 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run-ovn\") pod \"723ca26e-f925-47cc-92e3-998ff36f3e92\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.462576 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-log-ovn\") pod \"723ca26e-f925-47cc-92e3-998ff36f3e92\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.462606 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/723ca26e-f925-47cc-92e3-998ff36f3e92-scripts\") pod 
\"723ca26e-f925-47cc-92e3-998ff36f3e92\" (UID: \"723ca26e-f925-47cc-92e3-998ff36f3e92\") " Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.464501 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run" (OuterVolumeSpecName: "var-run") pod "723ca26e-f925-47cc-92e3-998ff36f3e92" (UID: "723ca26e-f925-47cc-92e3-998ff36f3e92"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.464523 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "723ca26e-f925-47cc-92e3-998ff36f3e92" (UID: "723ca26e-f925-47cc-92e3-998ff36f3e92"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.464578 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "723ca26e-f925-47cc-92e3-998ff36f3e92" (UID: "723ca26e-f925-47cc-92e3-998ff36f3e92"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.466937 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/723ca26e-f925-47cc-92e3-998ff36f3e92-scripts" (OuterVolumeSpecName: "scripts") pod "723ca26e-f925-47cc-92e3-998ff36f3e92" (UID: "723ca26e-f925-47cc-92e3-998ff36f3e92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.479273 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723ca26e-f925-47cc-92e3-998ff36f3e92-kube-api-access-8z2x7" (OuterVolumeSpecName: "kube-api-access-8z2x7") pod "723ca26e-f925-47cc-92e3-998ff36f3e92" (UID: "723ca26e-f925-47cc-92e3-998ff36f3e92"). InnerVolumeSpecName "kube-api-access-8z2x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.490945 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "723ca26e-f925-47cc-92e3-998ff36f3e92" (UID: "723ca26e-f925-47cc-92e3-998ff36f3e92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.532049 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "723ca26e-f925-47cc-92e3-998ff36f3e92" (UID: "723ca26e-f925-47cc-92e3-998ff36f3e92"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.564035 4889 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.564073 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z2x7\" (UniqueName: \"kubernetes.io/projected/723ca26e-f925-47cc-92e3-998ff36f3e92-kube-api-access-8z2x7\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.564085 4889 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.564096 4889 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/723ca26e-f925-47cc-92e3-998ff36f3e92-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.564104 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/723ca26e-f925-47cc-92e3-998ff36f3e92-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.564112 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.564121 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723ca26e-f925-47cc-92e3-998ff36f3e92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.970965 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dlfmr_723ca26e-f925-47cc-92e3-998ff36f3e92/ovn-controller/0.log" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.971023 4889 generic.go:334] "Generic (PLEG): container finished" podID="723ca26e-f925-47cc-92e3-998ff36f3e92" containerID="bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264" exitCode=137 Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.971088 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dlfmr" Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.971125 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dlfmr" event={"ID":"723ca26e-f925-47cc-92e3-998ff36f3e92","Type":"ContainerDied","Data":"bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264"} Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.971288 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dlfmr" event={"ID":"723ca26e-f925-47cc-92e3-998ff36f3e92","Type":"ContainerDied","Data":"2dfd04615046dca43385fe76342cc99d482d19fabfb4717506853ac27d584148"} Nov 28 07:11:44 crc kubenswrapper[4889]: I1128 07:11:44.971372 4889 scope.go:117] "RemoveContainer" containerID="bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.003827 4889 generic.go:334] "Generic (PLEG): container finished" podID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerID="206b0078bfc628bc38b7ee44283277823901e99e7548ec8345145fcecd5a4005" exitCode=0 Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.003941 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerDied","Data":"206b0078bfc628bc38b7ee44283277823901e99e7548ec8345145fcecd5a4005"} Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.012925 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dlfmr"] Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.015012 4889 scope.go:117] "RemoveContainer" containerID="bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264" Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.015760 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264\": container with ID starting with bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264 not found: ID does not exist" containerID="bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.015826 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264"} err="failed to get container status \"bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264\": rpc error: code = NotFound desc = could not find container \"bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264\": container with ID starting with bcf916fb9121c1fa6c8cf07cd06fcdd2651fd213339ebe7d2fda40618c9cb264 not found: ID does not exist" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.016430 4889 generic.go:334] "Generic (PLEG): container finished" podID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerID="7ca5f31a155561a771625bbbaea4e69473efa075212d50535e793088359bdafe" exitCode=0 Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.016514 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c99d75dcc-cgtnj" event={"ID":"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef","Type":"ContainerDied","Data":"7ca5f31a155561a771625bbbaea4e69473efa075212d50535e793088359bdafe"} Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.019837 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dlfmr"] Nov 28 07:11:45 crc kubenswrapper[4889]: 
I1128 07:11:45.316293 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.373023 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07dfa6e3-4c33-403d-96c6-819c44224466" path="/var/lib/kubelet/pods/07dfa6e3-4c33-403d-96c6-819c44224466/volumes" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.373749 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="723ca26e-f925-47cc-92e3-998ff36f3e92" path="/var/lib/kubelet/pods/723ca26e-f925-47cc-92e3-998ff36f3e92/volumes" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.374580 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" path="/var/lib/kubelet/pods/90d501b3-ad2c-4fb8-814d-411dc2a11f20/volumes" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.375890 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" path="/var/lib/kubelet/pods/972b231d-adb2-4355-ae5b-57fc0cc642f4/volumes" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.376765 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" path="/var/lib/kubelet/pods/9b744978-786e-4ab0-8a5c-1e8e3f9a2809/volumes" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.379492 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-combined-ca-bundle\") pod \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.386589 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.468780 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "511987a9-2a20-4fe8-9f21-ebc0f6b171cf" (UID: "511987a9-2a20-4fe8-9f21-ebc0f6b171cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481437 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-sg-core-conf-yaml\") pod \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481487 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-config\") pod \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481544 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-internal-tls-certs\") pod \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481587 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-ceilometer-tls-certs\") pod \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481632 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lksrx\" (UniqueName: \"kubernetes.io/projected/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-kube-api-access-lksrx\") pod \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481652 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-run-httpd\") pod \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481676 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-combined-ca-bundle\") pod \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481721 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6qr4\" (UniqueName: \"kubernetes.io/projected/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-kube-api-access-r6qr4\") pod \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481742 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-scripts\") pod \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481762 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-ovndb-tls-certs\") pod \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\" (UID: 
\"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481797 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-config-data\") pod \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.481823 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-log-httpd\") pod \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\" (UID: \"511987a9-2a20-4fe8-9f21-ebc0f6b171cf\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.482442 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "511987a9-2a20-4fe8-9f21-ebc0f6b171cf" (UID: "511987a9-2a20-4fe8-9f21-ebc0f6b171cf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.482478 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.482897 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "511987a9-2a20-4fe8-9f21-ebc0f6b171cf" (UID: "511987a9-2a20-4fe8-9f21-ebc0f6b171cf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.485194 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-kube-api-access-lksrx" (OuterVolumeSpecName: "kube-api-access-lksrx") pod "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" (UID: "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef"). InnerVolumeSpecName "kube-api-access-lksrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.485929 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-scripts" (OuterVolumeSpecName: "scripts") pod "511987a9-2a20-4fe8-9f21-ebc0f6b171cf" (UID: "511987a9-2a20-4fe8-9f21-ebc0f6b171cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.486064 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-kube-api-access-r6qr4" (OuterVolumeSpecName: "kube-api-access-r6qr4") pod "511987a9-2a20-4fe8-9f21-ebc0f6b171cf" (UID: "511987a9-2a20-4fe8-9f21-ebc0f6b171cf"). InnerVolumeSpecName "kube-api-access-r6qr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.499644 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "511987a9-2a20-4fe8-9f21-ebc0f6b171cf" (UID: "511987a9-2a20-4fe8-9f21-ebc0f6b171cf"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.516465 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" (UID: "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.518833 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "511987a9-2a20-4fe8-9f21-ebc0f6b171cf" (UID: "511987a9-2a20-4fe8-9f21-ebc0f6b171cf"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.521364 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" (UID: "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.524062 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-config" (OuterVolumeSpecName: "config") pod "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" (UID: "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.531914 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" (UID: "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.551632 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-config-data" (OuterVolumeSpecName: "config-data") pod "511987a9-2a20-4fe8-9f21-ebc0f6b171cf" (UID: "511987a9-2a20-4fe8-9f21-ebc0f6b171cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.579098 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.579547 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.580121 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.580223 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583397 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-public-tls-certs\") pod \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583545 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-httpd-config\") pod \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\" (UID: \"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef\") " Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583856 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lksrx\" (UniqueName: \"kubernetes.io/projected/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-kube-api-access-lksrx\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583875 4889 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583889 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583898 4889 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-r6qr4\" (UniqueName: \"kubernetes.io/projected/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-kube-api-access-r6qr4\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583906 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583914 4889 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583922 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583930 4889 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583938 4889 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583946 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583955 4889 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.583963 4889 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/511987a9-2a20-4fe8-9f21-ebc0f6b171cf-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.584035 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.585261 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.586606 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:45 crc kubenswrapper[4889]: E1128 07:11:45.586773 4889 
prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.587202 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" (UID: "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.622197 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" (UID: "5eeb0aa6-8c42-49d0-b4d6-8585db3558ef"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.685683 4889 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:45 crc kubenswrapper[4889]: I1128 07:11:45.685731 4889 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.053147 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"511987a9-2a20-4fe8-9f21-ebc0f6b171cf","Type":"ContainerDied","Data":"f119b919f166af15d837a07f7292e13a351fe294e3d6ace6be2440be956f3a17"} Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.053191 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.053201 4889 scope.go:117] "RemoveContainer" containerID="0c2cdf84f726e62f45e47d4328d523cb24c975652d68073e00aa714625b828c0" Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.058484 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c99d75dcc-cgtnj" event={"ID":"5eeb0aa6-8c42-49d0-b4d6-8585db3558ef","Type":"ContainerDied","Data":"220fc208c1d6de01525925b2f0eca71da7504dcbb2ddbc8205fa6b81c046785d"} Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.058853 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c99d75dcc-cgtnj" Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.076346 4889 scope.go:117] "RemoveContainer" containerID="eafa471d1e83e5c4174ab8b6222ceaeab6bcb18dd8a04cfa44cdd7c9aaae7176" Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.099983 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c99d75dcc-cgtnj"] Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.106594 4889 scope.go:117] "RemoveContainer" containerID="206b0078bfc628bc38b7ee44283277823901e99e7548ec8345145fcecd5a4005" Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.109557 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c99d75dcc-cgtnj"] Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.115924 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.121775 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.126587 4889 scope.go:117] "RemoveContainer" containerID="5b791b0ee5ba22707eb669b678aaed1200bebe3fe2bc24e3b032cc3e5c25310a" Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.141325 4889 scope.go:117] "RemoveContainer" containerID="a93ee07b36d6a14a36fd7e9a347daa5368daca2560780121eb1189c052d07f4b" Nov 28 07:11:46 crc kubenswrapper[4889]: I1128 07:11:46.157058 4889 scope.go:117] "RemoveContainer" containerID="7ca5f31a155561a771625bbbaea4e69473efa075212d50535e793088359bdafe" Nov 28 07:11:47 crc kubenswrapper[4889]: I1128 07:11:47.342925 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" path="/var/lib/kubelet/pods/511987a9-2a20-4fe8-9f21-ebc0f6b171cf/volumes" Nov 28 07:11:47 crc kubenswrapper[4889]: I1128 07:11:47.344043 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" path="/var/lib/kubelet/pods/5eeb0aa6-8c42-49d0-b4d6-8585db3558ef/volumes" Nov 28 07:11:50 crc kubenswrapper[4889]: E1128 07:11:50.578415 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:50 crc kubenswrapper[4889]: E1128 07:11:50.578909 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:50 crc kubenswrapper[4889]: E1128 07:11:50.579102 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:50 crc kubenswrapper[4889]: E1128 
07:11:50.579600 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:11:50 crc kubenswrapper[4889]: E1128 07:11:50.580099 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:50 crc kubenswrapper[4889]: E1128 07:11:50.600235 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:50 crc kubenswrapper[4889]: E1128 07:11:50.602429 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:50 crc kubenswrapper[4889]: E1128 07:11:50.602482 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:11:55 crc kubenswrapper[4889]: E1128 07:11:55.579131 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:55 crc kubenswrapper[4889]: E1128 07:11:55.580334 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:55 crc kubenswrapper[4889]: E1128 07:11:55.580820 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:11:55 crc kubenswrapper[4889]: E1128 07:11:55.580891 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:11:55 crc kubenswrapper[4889]: E1128 07:11:55.582534 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:55 crc kubenswrapper[4889]: E1128 07:11:55.585075 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:55 crc kubenswrapper[4889]: E1128 07:11:55.587225 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:11:55 crc kubenswrapper[4889]: E1128 07:11:55.587267 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:11:58 crc kubenswrapper[4889]: I1128 07:11:58.782894 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:11:58 crc kubenswrapper[4889]: I1128 07:11:58.783257 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:12:00 crc kubenswrapper[4889]: E1128 07:12:00.579295 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:12:00 crc kubenswrapper[4889]: E1128 07:12:00.580093 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:12:00 crc kubenswrapper[4889]: E1128 07:12:00.580668 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:12:00 crc kubenswrapper[4889]: E1128 07:12:00.580722 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:12:00 crc kubenswrapper[4889]: E1128 07:12:00.582682 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:12:00 crc kubenswrapper[4889]: E1128 07:12:00.587093 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:12:00 crc kubenswrapper[4889]: E1128 07:12:00.589392 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:12:00 crc kubenswrapper[4889]: E1128 07:12:00.589437 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:12:05 crc kubenswrapper[4889]: E1128 07:12:05.578614 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:12:05 crc kubenswrapper[4889]: E1128 07:12:05.579523 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:12:05 crc 
kubenswrapper[4889]: E1128 07:12:05.579791 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:12:05 crc kubenswrapper[4889]: E1128 07:12:05.579824 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:12:05 crc kubenswrapper[4889]: E1128 07:12:05.579868 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:12:05 crc kubenswrapper[4889]: E1128 07:12:05.581062 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:12:05 crc kubenswrapper[4889]: E1128 07:12:05.582238 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:12:05 crc kubenswrapper[4889]: E1128 07:12:05.582274 4889 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d2mhk" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.257695 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d2mhk_d69857d8-b0ca-49bd-9d89-3ad02ec7adea/ovs-vswitchd/0.log" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.258807 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2mhk" event={"ID":"d69857d8-b0ca-49bd-9d89-3ad02ec7adea","Type":"ContainerDied","Data":"de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb"} Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.258887 4889 generic.go:334] "Generic (PLEG): container finished" podID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" exitCode=137 Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.258968 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d2mhk" 
event={"ID":"d69857d8-b0ca-49bd-9d89-3ad02ec7adea","Type":"ContainerDied","Data":"1f976e7556fe2f100def71ad7f12a71c2d6fa26ca2096d5c5c38e59e17b7c44c"} Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.258992 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f976e7556fe2f100def71ad7f12a71c2d6fa26ca2096d5c5c38e59e17b7c44c" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.261134 4889 generic.go:334] "Generic (PLEG): container finished" podID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerID="49e67bb5951ea35d2e035af45fe412854503885e3be636b75ae99068b967486a" exitCode=137 Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.261177 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91eac1f-c699-4e53-9ff8-e8326bf4e185","Type":"ContainerDied","Data":"49e67bb5951ea35d2e035af45fe412854503885e3be636b75ae99068b967486a"} Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.307181 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d2mhk_d69857d8-b0ca-49bd-9d89-3ad02ec7adea/ovs-vswitchd/0.log" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.308017 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.318213 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388080 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-scripts\") pod \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388126 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-lib\") pod \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388142 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91eac1f-c699-4e53-9ff8-e8326bf4e185-etc-machine-id\") pod \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388186 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-etc-ovs\") pod \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388180 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-lib" (OuterVolumeSpecName: "var-lib") pod "d69857d8-b0ca-49bd-9d89-3ad02ec7adea" (UID: "d69857d8-b0ca-49bd-9d89-3ad02ec7adea"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388203 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-scripts\") pod \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388235 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f91eac1f-c699-4e53-9ff8-e8326bf4e185-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f91eac1f-c699-4e53-9ff8-e8326bf4e185" (UID: "f91eac1f-c699-4e53-9ff8-e8326bf4e185"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388240 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5btw\" (UniqueName: \"kubernetes.io/projected/f91eac1f-c699-4e53-9ff8-e8326bf4e185-kube-api-access-j5btw\") pod \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388244 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "d69857d8-b0ca-49bd-9d89-3ad02ec7adea" (UID: "d69857d8-b0ca-49bd-9d89-3ad02ec7adea"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388290 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5scv\" (UniqueName: \"kubernetes.io/projected/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-kube-api-access-k5scv\") pod \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388324 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data\") pod \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388349 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data-custom\") pod \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388371 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-log\") pod \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388406 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-run\") pod \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\" (UID: \"d69857d8-b0ca-49bd-9d89-3ad02ec7adea\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388433 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-combined-ca-bundle\") pod \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\" (UID: \"f91eac1f-c699-4e53-9ff8-e8326bf4e185\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388695 4889 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-lib\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388734 4889 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91eac1f-c699-4e53-9ff8-e8326bf4e185-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.388750 4889 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-etc-ovs\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.389117 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-log" (OuterVolumeSpecName: "var-log") pod "d69857d8-b0ca-49bd-9d89-3ad02ec7adea" (UID: "d69857d8-b0ca-49bd-9d89-3ad02ec7adea"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.389141 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-run" (OuterVolumeSpecName: "var-run") pod "d69857d8-b0ca-49bd-9d89-3ad02ec7adea" (UID: "d69857d8-b0ca-49bd-9d89-3ad02ec7adea"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.390072 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-scripts" (OuterVolumeSpecName: "scripts") pod "d69857d8-b0ca-49bd-9d89-3ad02ec7adea" (UID: "d69857d8-b0ca-49bd-9d89-3ad02ec7adea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.393348 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f91eac1f-c699-4e53-9ff8-e8326bf4e185" (UID: "f91eac1f-c699-4e53-9ff8-e8326bf4e185"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.393587 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-scripts" (OuterVolumeSpecName: "scripts") pod "f91eac1f-c699-4e53-9ff8-e8326bf4e185" (UID: "f91eac1f-c699-4e53-9ff8-e8326bf4e185"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.393996 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-kube-api-access-k5scv" (OuterVolumeSpecName: "kube-api-access-k5scv") pod "d69857d8-b0ca-49bd-9d89-3ad02ec7adea" (UID: "d69857d8-b0ca-49bd-9d89-3ad02ec7adea"). InnerVolumeSpecName "kube-api-access-k5scv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.394915 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91eac1f-c699-4e53-9ff8-e8326bf4e185-kube-api-access-j5btw" (OuterVolumeSpecName: "kube-api-access-j5btw") pod "f91eac1f-c699-4e53-9ff8-e8326bf4e185" (UID: "f91eac1f-c699-4e53-9ff8-e8326bf4e185"). InnerVolumeSpecName "kube-api-access-j5btw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.428747 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f91eac1f-c699-4e53-9ff8-e8326bf4e185" (UID: "f91eac1f-c699-4e53-9ff8-e8326bf4e185"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.457946 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data" (OuterVolumeSpecName: "config-data") pod "f91eac1f-c699-4e53-9ff8-e8326bf4e185" (UID: "f91eac1f-c699-4e53-9ff8-e8326bf4e185"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.490047 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5btw\" (UniqueName: \"kubernetes.io/projected/f91eac1f-c699-4e53-9ff8-e8326bf4e185-kube-api-access-j5btw\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.490118 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5scv\" (UniqueName: \"kubernetes.io/projected/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-kube-api-access-k5scv\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.490159 4889 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.490174 4889 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.490190 4889 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-log\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.490204 4889 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.490252 4889 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.490271 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d69857d8-b0ca-49bd-9d89-3ad02ec7adea-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc 
kubenswrapper[4889]: I1128 07:12:06.490287 4889 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91eac1f-c699-4e53-9ff8-e8326bf4e185-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.776488 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.896389 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-lock\") pod \"637e0576-2707-4c19-82d5-837d5e39578a\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.896817 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"637e0576-2707-4c19-82d5-837d5e39578a\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.896997 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-cache\") pod \"637e0576-2707-4c19-82d5-837d5e39578a\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.897063 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qvsr\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-kube-api-access-4qvsr\") pod \"637e0576-2707-4c19-82d5-837d5e39578a\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.897099 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-lock" (OuterVolumeSpecName: "lock") pod "637e0576-2707-4c19-82d5-837d5e39578a" (UID: "637e0576-2707-4c19-82d5-837d5e39578a"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.897117 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") pod \"637e0576-2707-4c19-82d5-837d5e39578a\" (UID: \"637e0576-2707-4c19-82d5-837d5e39578a\") " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.897504 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-cache" (OuterVolumeSpecName: "cache") pod "637e0576-2707-4c19-82d5-837d5e39578a" (UID: "637e0576-2707-4c19-82d5-837d5e39578a"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.897826 4889 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-lock\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.897863 4889 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/637e0576-2707-4c19-82d5-837d5e39578a-cache\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.899881 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-kube-api-access-4qvsr" (OuterVolumeSpecName: "kube-api-access-4qvsr") pod "637e0576-2707-4c19-82d5-837d5e39578a" (UID: "637e0576-2707-4c19-82d5-837d5e39578a"). InnerVolumeSpecName "kube-api-access-4qvsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.900241 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "637e0576-2707-4c19-82d5-837d5e39578a" (UID: "637e0576-2707-4c19-82d5-837d5e39578a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.900500 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "637e0576-2707-4c19-82d5-837d5e39578a" (UID: "637e0576-2707-4c19-82d5-837d5e39578a"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.998574 4889 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.998640 4889 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 28 07:12:06 crc kubenswrapper[4889]: I1128 07:12:06.998658 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qvsr\" (UniqueName: \"kubernetes.io/projected/637e0576-2707-4c19-82d5-837d5e39578a-kube-api-access-4qvsr\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.017450 4889 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.099518 4889 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.274370 4889 generic.go:334] "Generic (PLEG): container finished" podID="637e0576-2707-4c19-82d5-837d5e39578a" containerID="cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086" exitCode=137 Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.274482 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.274449 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086"} Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.274885 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"637e0576-2707-4c19-82d5-837d5e39578a","Type":"ContainerDied","Data":"a06355f237ffd92f316d2b84f24094b54a7ab391a92cd8ca72eb909bdf71abcc"} Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.274960 4889 scope.go:117] "RemoveContainer" containerID="cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.276611 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d2mhk" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.276593 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91eac1f-c699-4e53-9ff8-e8326bf4e185","Type":"ContainerDied","Data":"a3d1fac957c68dd8ee23ae095f85116ed1ba8797060f0ece64af38a91a7a1cce"} Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.276617 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.297390 4889 scope.go:117] "RemoveContainer" containerID="54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.317033 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.323379 4889 scope.go:117] "RemoveContainer" containerID="107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.342189 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.342231 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.347759 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.351266 4889 scope.go:117] "RemoveContainer" containerID="232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.353066 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-d2mhk"] Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.358299 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-d2mhk"] Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.366569 4889 scope.go:117] "RemoveContainer" containerID="b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.383975 4889 scope.go:117] "RemoveContainer" containerID="f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.409783 4889 scope.go:117] "RemoveContainer" containerID="4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 
07:12:07.427024 4889 scope.go:117] "RemoveContainer" containerID="d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.445551 4889 scope.go:117] "RemoveContainer" containerID="12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.464843 4889 scope.go:117] "RemoveContainer" containerID="e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.485084 4889 scope.go:117] "RemoveContainer" containerID="bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.504208 4889 scope.go:117] "RemoveContainer" containerID="c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.523615 4889 scope.go:117] "RemoveContainer" containerID="2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.544915 4889 scope.go:117] "RemoveContainer" containerID="3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.563370 4889 scope.go:117] "RemoveContainer" containerID="5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.582494 4889 scope.go:117] "RemoveContainer" containerID="cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.582890 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086\": container with ID starting with cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086 not found: ID does not exist" containerID="cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.582940 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086"} err="failed to get container status \"cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086\": rpc error: code = NotFound desc = could not find container \"cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086\": container with ID starting with cac240f97b2dc24ef11237a131f862a1e09dbd258355bd01d65ce2832fc1b086 not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.582984 4889 scope.go:117] "RemoveContainer" containerID="54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.583393 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802\": container with ID starting with 54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802 not found: ID does not exist" containerID="54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.583415 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802"} err="failed to get container status 
\"54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802\": rpc error: code = NotFound desc = could not find container \"54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802\": container with ID starting with 54e4b05f85e4cda2724139a369726f54bd16c56af9efff209accf9965a66b802 not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.583432 4889 scope.go:117] "RemoveContainer" containerID="107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.583651 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd\": container with ID starting with 107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd not found: ID does not exist" containerID="107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.583673 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd"} err="failed to get container status \"107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd\": rpc error: code = NotFound desc = could not find container \"107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd\": container with ID starting with 107d52dcac322989bb01b14c431be9afbc6f40a46ef85a25f07fa15e8de38dfd not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.583686 4889 scope.go:117] "RemoveContainer" containerID="232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.583924 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7\": container with ID starting with 232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7 not found: ID does not exist" containerID="232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.583941 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7"} err="failed to get container status \"232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7\": rpc error: code = NotFound desc = could not find container \"232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7\": container with ID starting with 232d4b8a05f34b68c117a7b9693e47f4cf76e8b2002344b74651bd2bdabaaea7 not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.583954 4889 scope.go:117] "RemoveContainer" containerID="b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.584282 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf\": container with ID starting with b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf not found: ID does not exist" containerID="b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.584318 4889 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf"} err="failed to get container status \"b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf\": rpc error: code = NotFound desc = could not find container \"b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf\": container with ID starting with b0927ee45e8f625fd0d1b85935d1cc83821c2964fda3a98ccdffed0cccb38aaf not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.584343 4889 scope.go:117] "RemoveContainer" containerID="f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.584625 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b\": container with ID starting with f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b not found: ID does not exist" containerID="f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.584650 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b"} err="failed to get container status \"f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b\": rpc error: code = NotFound desc = could not find container \"f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b\": container with ID starting with f3bafdc2d6d60e8d6f6eeb10a6dfada8f23d82c06a1507a8e6fd3d792198666b not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.584664 4889 scope.go:117] "RemoveContainer" containerID="4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.584924 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8\": container with ID starting with 4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8 not found: ID does not exist" containerID="4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.584943 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8"} err="failed to get container status \"4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8\": rpc error: code = NotFound desc = could not find container \"4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8\": container with ID starting with 4e9a65490449f6bc4e95b1984bf31b1de8e2a4375c34df51f6e3fa2a266905b8 not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.584956 4889 scope.go:117] "RemoveContainer" containerID="d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.585213 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f\": container with ID starting with d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f not found: ID does not exist" 
containerID="d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.585270 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f"} err="failed to get container status \"d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f\": rpc error: code = NotFound desc = could not find container \"d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f\": container with ID starting with d627b46436545053b9eb1dd47c05689965b71d9810e7dd696a3f3f4ba0c68e1f not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.585307 4889 scope.go:117] "RemoveContainer" containerID="12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.585585 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3\": container with ID starting with 12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3 not found: ID does not exist" containerID="12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.585607 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3"} err="failed to get container status \"12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3\": rpc error: code = NotFound desc = could not find container \"12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3\": container with ID starting with 12ba0f43ba4ee245cea6aca4630d8eefd081edd348bc72d8d0f87799193f8fd3 not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.585622 4889 scope.go:117] "RemoveContainer" containerID="e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.585853 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680\": container with ID starting with e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680 not found: ID does not exist" containerID="e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.585884 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680"} err="failed to get container status \"e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680\": rpc error: code = NotFound desc = could not find container \"e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680\": container with ID starting with e3c1e4777e9e91afe46ba26557f9d39e5dc3f2e16986f611a36f2ef5b5681680 not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.585903 4889 scope.go:117] "RemoveContainer" containerID="bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.586114 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9\": container with ID starting with bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9 not found: ID does not exist" containerID="bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.586144 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9"} err="failed to get container status \"bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9\": rpc error: code = NotFound desc = could not find container \"bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9\": container with ID starting with bf6916a972134e8e9152f0fe6e05ac5ee1df1fc9d0d870456af0044bd7b8dee9 not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.586161 4889 scope.go:117] "RemoveContainer" containerID="c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.586441 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc\": container with ID starting with c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc not found: ID does not exist" containerID="c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.586460 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc"} err="failed to get container status \"c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc\": rpc error: code = NotFound desc = could not find container \"c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc\": container with ID starting with c24bac9be1c0a74bc5615d98946b40a616fe4e881218e95edcabcd4d583609fc not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.586473 4889 scope.go:117] "RemoveContainer" containerID="2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.586677 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e\": container with ID starting with 2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e not found: ID does not exist" containerID="2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.586722 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e"} err="failed to get container status \"2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e\": rpc error: code = NotFound desc = could not find container \"2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e\": container with ID starting with 2a3cd6854481bc6d0e5ce79e45f141b2a9ef604afda97abf1f3cb68bfb86e30e not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.586741 4889 scope.go:117] "RemoveContainer" containerID="3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb" Nov 28 07:12:07 crc 
kubenswrapper[4889]: E1128 07:12:07.586947 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb\": container with ID starting with 3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb not found: ID does not exist" containerID="3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.586967 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb"} err="failed to get container status \"3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb\": rpc error: code = NotFound desc = could not find container \"3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb\": container with ID starting with 3d869e2ea048500ba3d20b5fa70932e05a233e62abcc73ceec9ca17b00981cdb not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.586980 4889 scope.go:117] "RemoveContainer" containerID="5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58" Nov 28 07:12:07 crc kubenswrapper[4889]: E1128 07:12:07.587210 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58\": container with ID starting with 5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58 not found: ID does not exist" containerID="5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.587238 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58"} err="failed to get container status \"5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58\": rpc error: code = NotFound desc = could not find container \"5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58\": container with ID starting with 5b41ac92b35687bd1ecbf7f295337f596b035292556e143adccf5b774e582c58 not found: ID does not exist" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.587255 4889 scope.go:117] "RemoveContainer" containerID="f75c2d3e942c4126e39bbb3c030aefb4924d2c9473c25ea74d8b3c3218308e58" Nov 28 07:12:07 crc kubenswrapper[4889]: I1128 07:12:07.603430 4889 scope.go:117] "RemoveContainer" containerID="49e67bb5951ea35d2e035af45fe412854503885e3be636b75ae99068b967486a" Nov 28 07:12:09 crc kubenswrapper[4889]: I1128 07:12:09.353206 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="637e0576-2707-4c19-82d5-837d5e39578a" path="/var/lib/kubelet/pods/637e0576-2707-4c19-82d5-837d5e39578a/volumes" Nov 28 07:12:09 crc kubenswrapper[4889]: I1128 07:12:09.357035 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" path="/var/lib/kubelet/pods/d69857d8-b0ca-49bd-9d89-3ad02ec7adea/volumes" Nov 28 07:12:09 crc kubenswrapper[4889]: I1128 07:12:09.358444 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" path="/var/lib/kubelet/pods/f91eac1f-c699-4e53-9ff8-e8326bf4e185/volumes" Nov 28 07:12:28 crc kubenswrapper[4889]: I1128 07:12:28.783039 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:12:28 crc kubenswrapper[4889]: I1128 07:12:28.783454 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:12:28 crc kubenswrapper[4889]: I1128 07:12:28.783502 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" Nov 28 07:12:28 crc kubenswrapper[4889]: I1128 07:12:28.784191 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59b1be213e0c3af7ecbb85479735c5e364bee7085ba772a3db6c7ee269ef019c"} pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:12:28 crc kubenswrapper[4889]: I1128 07:12:28.784235 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" containerID="cri-o://59b1be213e0c3af7ecbb85479735c5e364bee7085ba772a3db6c7ee269ef019c" gracePeriod=600 Nov 28 07:12:29 crc kubenswrapper[4889]: I1128 07:12:29.459027 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerID="59b1be213e0c3af7ecbb85479735c5e364bee7085ba772a3db6c7ee269ef019c" exitCode=0 Nov 28 07:12:29 crc kubenswrapper[4889]: I1128 07:12:29.459101 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerDied","Data":"59b1be213e0c3af7ecbb85479735c5e364bee7085ba772a3db6c7ee269ef019c"} Nov 28 07:12:29 crc kubenswrapper[4889]: I1128 07:12:29.459346 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"} Nov 28 07:12:29 crc kubenswrapper[4889]: I1128 07:12:29.459368 4889 scope.go:117] "RemoveContainer" containerID="5b371f61ff4e58e3c8a1cc2889d70d7351a69170427032ddc9f014086d459fb3" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.882924 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6jns"] Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.883982 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server-init" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884003 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server-init" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884027 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 
07:13:06.884035 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884045 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884052 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-server" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884060 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d7e485-1911-4206-bf42-9a57a855a880" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884068 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d7e485-1911-4206-bf42-9a57a855a880" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884078 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerName="barbican-keystone-listener" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884085 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerName="barbican-keystone-listener" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884098 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884108 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884117 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerName="glance-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884124 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerName="glance-log" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884139 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884147 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884155 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="ovn-northd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884162 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="ovn-northd" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884173 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884180 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884195 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerName="cinder-scheduler" Nov 28 
07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884202 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerName="cinder-scheduler" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884215 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884223 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-api" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884237 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-reaper" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884246 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-reaper" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884256 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884263 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884275 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca42308-451d-48e1-a74f-2c7ce6c6a53a" containerName="nova-cell0-conductor-conductor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884282 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca42308-451d-48e1-a74f-2c7ce6c6a53a" containerName="nova-cell0-conductor-conductor" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884293 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-updater" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884301 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-updater" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884313 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884321 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884336 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="sg-core" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884344 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="sg-core" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884354 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="swift-recon-cron" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884362 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="swift-recon-cron" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884370 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" 
containerName="object-updater" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884379 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-updater" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884395 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf7fcae-8493-4333-96c4-d4692a144187" containerName="mysql-bootstrap" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884402 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf7fcae-8493-4333-96c4-d4692a144187" containerName="mysql-bootstrap" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884412 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerName="glance-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884420 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerName="glance-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884430 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="ceilometer-notification-agent" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884438 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="ceilometer-notification-agent" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884449 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerName="glance-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884457 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerName="glance-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884464 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" containerName="dnsmasq-dns" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884472 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" containerName="dnsmasq-dns" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884485 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerName="placement-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884493 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerName="placement-api" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884505 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884513 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884521 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07dfa6e3-4c33-403d-96c6-819c44224466" containerName="keystone-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884528 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="07dfa6e3-4c33-403d-96c6-819c44224466" containerName="keystone-api" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884537 4889 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="00c7d31d-27e7-45cc-abb6-bae21de9135f" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884544 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c7d31d-27e7-45cc-abb6-bae21de9135f" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884557 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" containerName="setup-container" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884564 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" containerName="setup-container" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884572 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da3d6a4-5874-4305-b358-9765720b68f9" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884580 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da3d6a4-5874-4305-b358-9765720b68f9" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884589 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5deb3d-df4a-48e4-844b-35247485825a" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884597 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5deb3d-df4a-48e4-844b-35247485825a" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884609 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" containerName="rabbitmq" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884616 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" containerName="rabbitmq" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884624 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-expirer" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884632 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-expirer" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884646 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" containerName="ovsdbserver-sb" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884653 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" containerName="ovsdbserver-sb" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884664 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9c3bd5-587a-40cb-b489-764fd5f98ca0" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884672 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9c3bd5-587a-40cb-b489-764fd5f98ca0" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884680 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" containerName="galera" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884687 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" containerName="galera" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884696 4889 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884722 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884737 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884744 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884755 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884763 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884781 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf7fcae-8493-4333-96c4-d4692a144187" containerName="galera" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884789 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf7fcae-8493-4333-96c4-d4692a144187" containerName="galera" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884801 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22942b26-7d2f-4a77-9d97-b7bd457dcfe7" containerName="nova-scheduler-scheduler" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884809 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="22942b26-7d2f-4a77-9d97-b7bd457dcfe7" containerName="nova-scheduler-scheduler" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884822 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerName="probe" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884830 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerName="probe" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884842 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="rsync" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884849 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="rsync" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884863 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741842f5-b565-43c8-bd99-eb15782fcf18" containerName="barbican-worker" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884871 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="741842f5-b565-43c8-bd99-eb15782fcf18" containerName="barbican-worker" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884884 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" containerName="proxy-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884891 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" containerName="proxy-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884905 4889 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" containerName="proxy-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884914 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" containerName="proxy-server" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884928 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07c52ed-8e06-4dc1-8400-09a9dba35926" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884936 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07c52ed-8e06-4dc1-8400-09a9dba35926" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884948 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerName="neutron-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884956 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerName="neutron-api" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884967 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" containerName="rabbitmq" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884975 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" containerName="rabbitmq" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.884989 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe87e12e-e732-4a38-b9bc-0e6000da9bd8" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.884997 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe87e12e-e732-4a38-b9bc-0e6000da9bd8" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885004 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-metadata" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885012 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-metadata" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885020 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885027 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-server" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885041 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885051 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885059 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741842f5-b565-43c8-bd99-eb15782fcf18" containerName="barbican-worker-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885067 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="741842f5-b565-43c8-bd99-eb15782fcf18" containerName="barbican-worker-log" Nov 28 07:13:06 crc kubenswrapper[4889]: 
E1128 07:13:06.885075 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="proxy-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885083 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="proxy-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885093 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885101 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885110 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" containerName="setup-container" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885117 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" containerName="setup-container" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885127 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885135 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885145 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885152 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885163 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885170 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-server" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885178 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerName="glance-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885186 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerName="glance-log" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885200 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885209 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885222 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92a932b-ef66-408c-883e-99412a94d0da" containerName="ovsdbserver-nb" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885231 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92a932b-ef66-408c-883e-99412a94d0da" containerName="ovsdbserver-nb" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 
07:13:06.885243 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" containerName="mysql-bootstrap" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885251 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" containerName="mysql-bootstrap" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885264 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92a932b-ef66-408c-883e-99412a94d0da" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885271 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92a932b-ef66-408c-883e-99412a94d0da" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885281 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885289 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-log" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885303 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d578f2c7-2fee-4032-b63e-0dc8e5d1371f" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885311 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d578f2c7-2fee-4032-b63e-0dc8e5d1371f" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885320 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="ceilometer-central-agent" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885327 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="ceilometer-central-agent" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885343 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885351 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885365 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerName="barbican-keystone-listener-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885373 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerName="barbican-keystone-listener-log" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885383 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4ff6f2-105e-4f62-be58-3054d0a54fed" containerName="nova-cell1-conductor-conductor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885391 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4ff6f2-105e-4f62-be58-3054d0a54fed" containerName="nova-cell1-conductor-conductor" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885405 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9aacedc-5e53-4c26-8ded-2af578a7de41" containerName="kube-state-metrics" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 
07:13:06.885413 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9aacedc-5e53-4c26-8ded-2af578a7de41" containerName="kube-state-metrics" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885425 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" containerName="init" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885434 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" containerName="init" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885445 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerName="placement-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885452 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerName="placement-log" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885462 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5276ecd4-549a-4a41-94be-6408535b2492" containerName="memcached" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885471 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5276ecd4-549a-4a41-94be-6408535b2492" containerName="memcached" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885478 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723ca26e-f925-47cc-92e3-998ff36f3e92" containerName="ovn-controller" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885486 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="723ca26e-f925-47cc-92e3-998ff36f3e92" containerName="ovn-controller" Nov 28 07:13:06 crc kubenswrapper[4889]: E1128 07:13:06.885496 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerName="neutron-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885503 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerName="neutron-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885786 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5276ecd4-549a-4a41-94be-6408535b2492" containerName="memcached" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885807 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerName="cinder-scheduler" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885819 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885830 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4be180d-c2ba-47ad-964d-18e7b1c12b2b" containerName="galera" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885839 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885849 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da3d6a4-5874-4305-b358-9765720b68f9" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885864 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerName="neutron-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 
07:13:06.885873 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="ceilometer-notification-agent" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885890 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92a932b-ef66-408c-883e-99412a94d0da" containerName="ovsdbserver-nb" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885897 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9aacedc-5e53-4c26-8ded-2af578a7de41" containerName="kube-state-metrics" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885911 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b744978-786e-4ab0-8a5c-1e8e3f9a2809" containerName="rabbitmq" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885923 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-reaper" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885935 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885949 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="07dfa6e3-4c33-403d-96c6-819c44224466" containerName="keystone-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885958 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d578f2c7-2fee-4032-b63e-0dc8e5d1371f" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885968 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerName="glance-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885978 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5ffa2e-0101-4c23-9a04-b6baa4a9ab9d" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.885990 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886002 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="rsync" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886010 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" containerName="ovsdbserver-sb" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886017 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91eac1f-c699-4e53-9ff8-e8326bf4e185" containerName="probe" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886026 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="ceilometer-central-agent" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886041 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="22942b26-7d2f-4a77-9d97-b7bd457dcfe7" containerName="nova-scheduler-scheduler" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886050 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c7d31d-27e7-45cc-abb6-bae21de9135f" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886062 4889 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" containerName="proxy-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886071 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c960973-a307-4a8a-9fe6-885450c512e0" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886079 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886090 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886101 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886112 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerName="placement-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886125 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9c3bd5-587a-40cb-b489-764fd5f98ca0" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886134 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886143 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886155 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerName="glance-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886164 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92a932b-ef66-408c-883e-99412a94d0da" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886174 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5deb3d-df4a-48e4-844b-35247485825a" containerName="openstack-network-exporter" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886187 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886198 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07c52ed-8e06-4dc1-8400-09a9dba35926" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886209 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb1e21ee-7d2d-4d55-8a0e-d6235a12f0ae" containerName="glance-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886221 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4ff6f2-105e-4f62-be58-3054d0a54fed" containerName="nova-cell1-conductor-conductor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886232 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-updater" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886241 4889 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="741842f5-b565-43c8-bd99-eb15782fcf18" containerName="barbican-worker" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886253 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe87e12e-e732-4a38-b9bc-0e6000da9bd8" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886263 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="sg-core" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886277 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="660e4f27-4ee4-43d9-b155-7132c78e9a21" containerName="nova-api-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886287 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovsdb-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886299 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerName="barbican-keystone-listener" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886307 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886317 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="container-updater" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886329 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-auditor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886341 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69857d8-b0ca-49bd-9d89-3ad02ec7adea" containerName="ovs-vswitchd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886354 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ed215c-b8d0-43fb-85bd-8531e5acf609" containerName="glance-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886368 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="010c335b-59f4-4016-976b-ac71eaf5d14f" containerName="placement-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886379 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="741842f5-b565-43c8-bd99-eb15782fcf18" containerName="barbican-worker-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886387 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7209dbe-be81-47dd-9255-c2444debdaa9" containerName="cinder-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886397 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="swift-recon-cron" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886408 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="511987a9-2a20-4fe8-9f21-ebc0f6b171cf" containerName="proxy-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886433 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-expirer" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886446 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1f8a48-5ca3-46e1-8246-b8c6737b45cb" containerName="dnsmasq-dns" Nov 28 07:13:06 crc 
kubenswrapper[4889]: I1128 07:13:06.886478 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca42308-451d-48e1-a74f-2c7ce6c6a53a" containerName="nova-cell0-conductor-conductor" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886488 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41bad87-7181-45c9-ad09-bf49b278416d" containerName="barbican-api" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886499 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d7e485-1911-4206-bf42-9a57a855a880" containerName="mariadb-account-delete" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886511 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eeb0aa6-8c42-49d0-b4d6-8585db3558ef" containerName="neutron-httpd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886525 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf7fcae-8493-4333-96c4-d4692a144187" containerName="galera" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886536 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56d3b5d-d634-47f9-b252-1437066f06e8" containerName="nova-metadata-metadata" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886549 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cff4827-368d-4e19-beb0-b22b71032f26" containerName="proxy-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886560 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="723ca26e-f925-47cc-92e3-998ff36f3e92" containerName="ovn-controller" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886572 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="account-server" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886581 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="972b231d-adb2-4355-ae5b-57fc0cc642f4" containerName="ovn-northd" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886593 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d501b3-ad2c-4fb8-814d-411dc2a11f20" containerName="rabbitmq" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886605 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e0576-2707-4c19-82d5-837d5e39578a" containerName="object-replicator" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.886617 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d29dfd27-459d-4ade-8119-3c84095d0b1b" containerName="barbican-keystone-listener-log" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.888029 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:06 crc kubenswrapper[4889]: I1128 07:13:06.905614 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6jns"] Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.026881 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqfw\" (UniqueName: \"kubernetes.io/projected/90e283b6-6289-46fd-80c2-ee573fb725d9-kube-api-access-6mqfw\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.027008 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-utilities\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.027035 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-catalog-content\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.128585 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mqfw\" (UniqueName: \"kubernetes.io/projected/90e283b6-6289-46fd-80c2-ee573fb725d9-kube-api-access-6mqfw\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.128961 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-utilities\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.128982 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-catalog-content\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.129458 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-catalog-content\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.129751 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-utilities\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.151613 4889 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6mqfw\" (UniqueName: \"kubernetes.io/projected/90e283b6-6289-46fd-80c2-ee573fb725d9-kube-api-access-6mqfw\") pod \"community-operators-c6jns\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.239479 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:07 crc kubenswrapper[4889]: I1128 07:13:07.718685 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6jns"] Nov 28 07:13:08 crc kubenswrapper[4889]: I1128 07:13:08.032097 4889 generic.go:334] "Generic (PLEG): container finished" podID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerID="b4a3882acac9cbdf0721946ea40908e6f1a1c03320a0121df62d2c3d80f75b89" exitCode=0 Nov 28 07:13:08 crc kubenswrapper[4889]: I1128 07:13:08.032156 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6jns" event={"ID":"90e283b6-6289-46fd-80c2-ee573fb725d9","Type":"ContainerDied","Data":"b4a3882acac9cbdf0721946ea40908e6f1a1c03320a0121df62d2c3d80f75b89"} Nov 28 07:13:08 crc kubenswrapper[4889]: I1128 07:13:08.032222 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6jns" event={"ID":"90e283b6-6289-46fd-80c2-ee573fb725d9","Type":"ContainerStarted","Data":"bfa8449da4d260497db3e72fe7353491350029ddad2d0664c3835ac02b32b8d2"} Nov 28 07:13:09 crc kubenswrapper[4889]: I1128 07:13:09.045987 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6jns" event={"ID":"90e283b6-6289-46fd-80c2-ee573fb725d9","Type":"ContainerStarted","Data":"dfa3c128221d64bf3ed8b28bbe76825e747fdbef07a6fabbd3b12698af657eb7"} Nov 28 07:13:10 crc kubenswrapper[4889]: I1128 07:13:10.056502 4889 generic.go:334] "Generic (PLEG): container finished" podID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerID="dfa3c128221d64bf3ed8b28bbe76825e747fdbef07a6fabbd3b12698af657eb7" exitCode=0 Nov 28 07:13:10 crc kubenswrapper[4889]: I1128 07:13:10.056573 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6jns" event={"ID":"90e283b6-6289-46fd-80c2-ee573fb725d9","Type":"ContainerDied","Data":"dfa3c128221d64bf3ed8b28bbe76825e747fdbef07a6fabbd3b12698af657eb7"} Nov 28 07:13:11 crc kubenswrapper[4889]: I1128 07:13:11.070693 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6jns" event={"ID":"90e283b6-6289-46fd-80c2-ee573fb725d9","Type":"ContainerStarted","Data":"1d0cb7cb418c938ec7efe2f368a4605218ca1bbc0a5c1b276bbf605a3d32ab45"} Nov 28 07:13:11 crc kubenswrapper[4889]: I1128 07:13:11.102365 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6jns" podStartSLOduration=2.66658285 podStartE2EDuration="5.10234232s" podCreationTimestamp="2025-11-28 07:13:06 +0000 UTC" firstStartedPulling="2025-11-28 07:13:08.033841447 +0000 UTC m=+1511.004075622" lastFinishedPulling="2025-11-28 07:13:10.469600937 +0000 UTC m=+1513.439835092" observedRunningTime="2025-11-28 07:13:11.099356805 +0000 UTC m=+1514.069591010" watchObservedRunningTime="2025-11-28 07:13:11.10234232 +0000 UTC m=+1514.072576495" Nov 28 07:13:13 crc kubenswrapper[4889]: I1128 07:13:13.848398 4889 scope.go:117] "RemoveContainer" 
containerID="c1ba8c5c6fed2b237e0d295eeb769adf091e1a2ffd35bf71a08857b2b11f23fd" Nov 28 07:13:13 crc kubenswrapper[4889]: I1128 07:13:13.885641 4889 scope.go:117] "RemoveContainer" containerID="94cc8d2bc182aab5529032c05f67b4f964516d5c3e5df53fa2edf3225847773b" Nov 28 07:13:13 crc kubenswrapper[4889]: I1128 07:13:13.905268 4889 scope.go:117] "RemoveContainer" containerID="de0a7a7446d5fb6d8c7d31fb0d4b88c97c4549baa99d0c16b07392283c660ceb" Nov 28 07:13:13 crc kubenswrapper[4889]: I1128 07:13:13.941786 4889 scope.go:117] "RemoveContainer" containerID="2daa0208004ed61abb16fbdbc99bf42d79d5859769f420aea92707a8c6cfa182" Nov 28 07:13:13 crc kubenswrapper[4889]: I1128 07:13:13.970486 4889 scope.go:117] "RemoveContainer" containerID="200e7722bf6ef29e833362691d3b0097abcbb8024dfe1437982547abdecb41e5" Nov 28 07:13:14 crc kubenswrapper[4889]: I1128 07:13:14.005411 4889 scope.go:117] "RemoveContainer" containerID="63d8e75a181bc24f5d07e30475fe1dd420fb0a22f5ad9e8334587097adfb8675" Nov 28 07:13:14 crc kubenswrapper[4889]: I1128 07:13:14.022243 4889 scope.go:117] "RemoveContainer" containerID="745e1c1cea393b7605682da0ded937a0440ca14c01a32ad3b7645010fdd7508e" Nov 28 07:13:14 crc kubenswrapper[4889]: I1128 07:13:14.048434 4889 scope.go:117] "RemoveContainer" containerID="da3ee6917396050d8ec1b497f4f1a88e07614540f1e4e0679fed1d3c1acad3ab" Nov 28 07:13:14 crc kubenswrapper[4889]: I1128 07:13:14.071971 4889 scope.go:117] "RemoveContainer" containerID="b7ac8ffea194ee1a9346ca07c093bf39841b5ab94b2cd10b7674049ead231395" Nov 28 07:13:14 crc kubenswrapper[4889]: I1128 07:13:14.087566 4889 scope.go:117] "RemoveContainer" containerID="e098daa3711e7b7e89be6cb284f74fbffdd36ea5b51ed102a043f768393d0a5f" Nov 28 07:13:14 crc kubenswrapper[4889]: I1128 07:13:14.111762 4889 scope.go:117] "RemoveContainer" containerID="3a59821bc02accc1c7655cfaca4f77c3faf9bf8e1339cbfe6dfdf26f4903500b" Nov 28 07:13:17 crc kubenswrapper[4889]: I1128 07:13:17.240161 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:17 crc kubenswrapper[4889]: I1128 07:13:17.240519 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:17 crc kubenswrapper[4889]: I1128 07:13:17.296465 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:18 crc kubenswrapper[4889]: I1128 07:13:18.177507 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:18 crc kubenswrapper[4889]: I1128 07:13:18.216188 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6jns"] Nov 28 07:13:20 crc kubenswrapper[4889]: I1128 07:13:20.159397 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c6jns" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerName="registry-server" containerID="cri-o://1d0cb7cb418c938ec7efe2f368a4605218ca1bbc0a5c1b276bbf605a3d32ab45" gracePeriod=2 Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.170656 4889 generic.go:334] "Generic (PLEG): container finished" podID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerID="1d0cb7cb418c938ec7efe2f368a4605218ca1bbc0a5c1b276bbf605a3d32ab45" exitCode=0 Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.170737 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-c6jns" event={"ID":"90e283b6-6289-46fd-80c2-ee573fb725d9","Type":"ContainerDied","Data":"1d0cb7cb418c938ec7efe2f368a4605218ca1bbc0a5c1b276bbf605a3d32ab45"} Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.666809 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.756943 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-utilities\") pod \"90e283b6-6289-46fd-80c2-ee573fb725d9\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.756987 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-catalog-content\") pod \"90e283b6-6289-46fd-80c2-ee573fb725d9\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.757025 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mqfw\" (UniqueName: \"kubernetes.io/projected/90e283b6-6289-46fd-80c2-ee573fb725d9-kube-api-access-6mqfw\") pod \"90e283b6-6289-46fd-80c2-ee573fb725d9\" (UID: \"90e283b6-6289-46fd-80c2-ee573fb725d9\") " Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.758059 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-utilities" (OuterVolumeSpecName: "utilities") pod "90e283b6-6289-46fd-80c2-ee573fb725d9" (UID: "90e283b6-6289-46fd-80c2-ee573fb725d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.763936 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e283b6-6289-46fd-80c2-ee573fb725d9-kube-api-access-6mqfw" (OuterVolumeSpecName: "kube-api-access-6mqfw") pod "90e283b6-6289-46fd-80c2-ee573fb725d9" (UID: "90e283b6-6289-46fd-80c2-ee573fb725d9"). InnerVolumeSpecName "kube-api-access-6mqfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.808343 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90e283b6-6289-46fd-80c2-ee573fb725d9" (UID: "90e283b6-6289-46fd-80c2-ee573fb725d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.858257 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.858293 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e283b6-6289-46fd-80c2-ee573fb725d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:13:21 crc kubenswrapper[4889]: I1128 07:13:21.858308 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mqfw\" (UniqueName: \"kubernetes.io/projected/90e283b6-6289-46fd-80c2-ee573fb725d9-kube-api-access-6mqfw\") on node \"crc\" DevicePath \"\"" Nov 28 07:13:22 crc kubenswrapper[4889]: I1128 07:13:22.180915 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6jns" event={"ID":"90e283b6-6289-46fd-80c2-ee573fb725d9","Type":"ContainerDied","Data":"bfa8449da4d260497db3e72fe7353491350029ddad2d0664c3835ac02b32b8d2"} Nov 28 07:13:22 crc kubenswrapper[4889]: I1128 07:13:22.180993 4889 scope.go:117] "RemoveContainer" containerID="1d0cb7cb418c938ec7efe2f368a4605218ca1bbc0a5c1b276bbf605a3d32ab45" Nov 28 07:13:22 crc kubenswrapper[4889]: I1128 07:13:22.180995 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6jns" Nov 28 07:13:22 crc kubenswrapper[4889]: I1128 07:13:22.201319 4889 scope.go:117] "RemoveContainer" containerID="dfa3c128221d64bf3ed8b28bbe76825e747fdbef07a6fabbd3b12698af657eb7" Nov 28 07:13:22 crc kubenswrapper[4889]: I1128 07:13:22.217642 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6jns"] Nov 28 07:13:22 crc kubenswrapper[4889]: I1128 07:13:22.223614 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c6jns"] Nov 28 07:13:22 crc kubenswrapper[4889]: I1128 07:13:22.240780 4889 scope.go:117] "RemoveContainer" containerID="b4a3882acac9cbdf0721946ea40908e6f1a1c03320a0121df62d2c3d80f75b89" Nov 28 07:13:23 crc kubenswrapper[4889]: I1128 07:13:23.340399 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" path="/var/lib/kubelet/pods/90e283b6-6289-46fd-80c2-ee573fb725d9/volumes" Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.400674 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n9bwl"] Nov 28 07:13:24 crc kubenswrapper[4889]: E1128 07:13:24.401250 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerName="registry-server" Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.401262 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerName="registry-server" Nov 28 07:13:24 crc kubenswrapper[4889]: E1128 07:13:24.401277 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerName="extract-utilities" Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.401283 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerName="extract-utilities" Nov 28 07:13:24 crc kubenswrapper[4889]: E1128 07:13:24.401302 4889 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerName="extract-content"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.401308 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerName="extract-content"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.401459 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e283b6-6289-46fd-80c2-ee573fb725d9" containerName="registry-server"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.402456 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.460967 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9bwl"]
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.497315 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-catalog-content\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.497375 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-utilities\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.497421 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2m7b\" (UniqueName: \"kubernetes.io/projected/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-kube-api-access-c2m7b\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.598763 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-catalog-content\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.598823 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-utilities\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.598862 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2m7b\" (UniqueName: \"kubernetes.io/projected/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-kube-api-access-c2m7b\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.599288 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-catalog-content\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.599373 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-utilities\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.617985 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2m7b\" (UniqueName: \"kubernetes.io/projected/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-kube-api-access-c2m7b\") pod \"certified-operators-n9bwl\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") " pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:24 crc kubenswrapper[4889]: I1128 07:13:24.728086 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:25 crc kubenswrapper[4889]: I1128 07:13:25.222240 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9bwl"]
Nov 28 07:13:25 crc kubenswrapper[4889]: W1128 07:13:25.225083 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d97a0d9_0d4b_4d44_9ff4_15564b8fa04b.slice/crio-720280377c9ac5b4eb99f9cf678fdc5fba34be7cd9b0a34440c7f7fc97b69c52 WatchSource:0}: Error finding container 720280377c9ac5b4eb99f9cf678fdc5fba34be7cd9b0a34440c7f7fc97b69c52: Status 404 returned error can't find the container with id 720280377c9ac5b4eb99f9cf678fdc5fba34be7cd9b0a34440c7f7fc97b69c52
Nov 28 07:13:26 crc kubenswrapper[4889]: I1128 07:13:26.222025 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9bwl" event={"ID":"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b","Type":"ContainerStarted","Data":"720280377c9ac5b4eb99f9cf678fdc5fba34be7cd9b0a34440c7f7fc97b69c52"}
Nov 28 07:13:27 crc kubenswrapper[4889]: I1128 07:13:27.231797 4889 generic.go:334] "Generic (PLEG): container finished" podID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerID="d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445" exitCode=0
Nov 28 07:13:27 crc kubenswrapper[4889]: I1128 07:13:27.231843 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9bwl" event={"ID":"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b","Type":"ContainerDied","Data":"d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445"}
Nov 28 07:13:28 crc kubenswrapper[4889]: I1128 07:13:28.240989 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9bwl" event={"ID":"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b","Type":"ContainerStarted","Data":"3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867"}
Nov 28 07:13:29 crc kubenswrapper[4889]: I1128 07:13:29.252170 4889 generic.go:334] "Generic (PLEG): container finished" podID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerID="3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867" exitCode=0
Nov 28 07:13:29 crc kubenswrapper[4889]: I1128 07:13:29.252212 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9bwl" event={"ID":"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b","Type":"ContainerDied","Data":"3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867"}
Nov 28 07:13:30 crc kubenswrapper[4889]: I1128 07:13:30.262134 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9bwl" event={"ID":"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b","Type":"ContainerStarted","Data":"4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05"}
Nov 28 07:13:30 crc kubenswrapper[4889]: I1128 07:13:30.286501 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n9bwl" podStartSLOduration=3.828621709 podStartE2EDuration="6.286477389s" podCreationTimestamp="2025-11-28 07:13:24 +0000 UTC" firstStartedPulling="2025-11-28 07:13:27.232901019 +0000 UTC m=+1530.203135174" lastFinishedPulling="2025-11-28 07:13:29.690756699 +0000 UTC m=+1532.660990854" observedRunningTime="2025-11-28 07:13:30.277311371 +0000 UTC m=+1533.247545526" watchObservedRunningTime="2025-11-28 07:13:30.286477389 +0000 UTC m=+1533.256711564"
Nov 28 07:13:34 crc kubenswrapper[4889]: I1128 07:13:34.728703 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:34 crc kubenswrapper[4889]: I1128 07:13:34.729890 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:34 crc kubenswrapper[4889]: I1128 07:13:34.783812 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:35 crc kubenswrapper[4889]: I1128 07:13:35.364824 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:35 crc kubenswrapper[4889]: I1128 07:13:35.408630 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9bwl"]
Nov 28 07:13:37 crc kubenswrapper[4889]: I1128 07:13:37.329473 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n9bwl" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerName="registry-server" containerID="cri-o://4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05" gracePeriod=2
Nov 28 07:13:37 crc kubenswrapper[4889]: I1128 07:13:37.780327 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:37 crc kubenswrapper[4889]: I1128 07:13:37.980532 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-catalog-content\") pod \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") "
Nov 28 07:13:37 crc kubenswrapper[4889]: I1128 07:13:37.980754 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-utilities\") pod \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") "
Nov 28 07:13:37 crc kubenswrapper[4889]: I1128 07:13:37.980924 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2m7b\" (UniqueName: \"kubernetes.io/projected/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-kube-api-access-c2m7b\") pod \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\" (UID: \"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b\") "
Nov 28 07:13:37 crc kubenswrapper[4889]: I1128 07:13:37.981664 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-utilities" (OuterVolumeSpecName: "utilities") pod "2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" (UID: "2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:13:37 crc kubenswrapper[4889]: I1128 07:13:37.988863 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-kube-api-access-c2m7b" (OuterVolumeSpecName: "kube-api-access-c2m7b") pod "2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" (UID: "2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b"). InnerVolumeSpecName "kube-api-access-c2m7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.054820 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" (UID: "2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.081969 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2m7b\" (UniqueName: \"kubernetes.io/projected/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-kube-api-access-c2m7b\") on node \"crc\" DevicePath \"\""
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.082005 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.082023 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.339379 4889 generic.go:334] "Generic (PLEG): container finished" podID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerID="4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05" exitCode=0
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.339433 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9bwl" event={"ID":"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b","Type":"ContainerDied","Data":"4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05"}
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.339462 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9bwl" event={"ID":"2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b","Type":"ContainerDied","Data":"720280377c9ac5b4eb99f9cf678fdc5fba34be7cd9b0a34440c7f7fc97b69c52"}
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.339480 4889 scope.go:117] "RemoveContainer" containerID="4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.339598 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9bwl"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.369271 4889 scope.go:117] "RemoveContainer" containerID="3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.373338 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9bwl"]
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.378950 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n9bwl"]
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.403429 4889 scope.go:117] "RemoveContainer" containerID="d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.419504 4889 scope.go:117] "RemoveContainer" containerID="4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05"
Nov 28 07:13:38 crc kubenswrapper[4889]: E1128 07:13:38.419873 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05\": container with ID starting with 4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05 not found: ID does not exist" containerID="4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.419904 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05"} err="failed to get container status \"4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05\": rpc error: code = NotFound desc = could not find container \"4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05\": container with ID starting with 4261cccaa7f5331b1c7a71abb4f9fd7d75de612ce661796f8ed4daa56ed0ac05 not found: ID does not exist"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.419926 4889 scope.go:117] "RemoveContainer" containerID="3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867"
Nov 28 07:13:38 crc kubenswrapper[4889]: E1128 07:13:38.420163 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867\": container with ID starting with 3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867 not found: ID does not exist" containerID="3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.420207 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867"} err="failed to get container status \"3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867\": rpc error: code = NotFound desc = could not find container \"3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867\": container with ID starting with 3c47ab120c12246955acb69e7cc8470e221eda98fec0c211a7c8327eccdfe867 not found: ID does not exist"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.420239 4889 scope.go:117] "RemoveContainer" containerID="d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445"
Nov 28 07:13:38 crc kubenswrapper[4889]: E1128 07:13:38.420464 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445\": container with ID starting with d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445 not found: ID does not exist" containerID="d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445"
Nov 28 07:13:38 crc kubenswrapper[4889]: I1128 07:13:38.420485 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445"} err="failed to get container status \"d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445\": rpc error: code = NotFound desc = could not find container \"d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445\": container with ID starting with d576e58773539460a4a47ef97039f6f6334ea67ae10bec59b7800ddd54148445 not found: ID does not exist"
Nov 28 07:13:39 crc kubenswrapper[4889]: I1128 07:13:39.340016 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" path="/var/lib/kubelet/pods/2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b/volumes"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.232985 4889 scope.go:117] "RemoveContainer" containerID="41c5f3c12a42d9eb237eca5b78a8ea4b30fa7324f282831cbd489a0028d90df2"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.276727 4889 scope.go:117] "RemoveContainer" containerID="1b137e75cb748bc2c3a15eb06dd5c410700da10a7dd8199d7ded055b04a1974c"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.306257 4889 scope.go:117] "RemoveContainer" containerID="55fe50cb71e61d63b4658594aa014b55e2caac30d4a3cfbeb4b318e6d0f5877b"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.366311 4889 scope.go:117] "RemoveContainer" containerID="ca8b62caf3e2fcb8383263600f9166e50b9e8e2684b835083a8ce3701a719aa2"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.396806 4889 scope.go:117] "RemoveContainer" containerID="1e9eae91f17d3ffa4da9b7b6996803051af34401caec34a002b4fcace79e9594"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.430992 4889 scope.go:117] "RemoveContainer" containerID="fd64288fdc055b1b763c5730ad8c45a2539a208bf63a18c6c7f0721e2b9508fb"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.454284 4889 scope.go:117] "RemoveContainer" containerID="0012a61ddc1ba62cde6a8248b5fdd03f750088e3a8e50ea4c5912c04f1f3e624"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.485600 4889 scope.go:117] "RemoveContainer" containerID="a3cf8b359e5f2daa8ae56b9273e1c2cfcd41a520a10500d23968f9027d5280a5"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.503147 4889 scope.go:117] "RemoveContainer" containerID="93cb5748e663fc4799cc331b48ae61183b634dd203239a64cdd7bbc3cd19d38a"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.532262 4889 scope.go:117] "RemoveContainer" containerID="f696dfe1fc9240b4c6f08cc823ad35ff28c6580eec8b16111f5f28555f902723"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.554479 4889 scope.go:117] "RemoveContainer" containerID="23f7858730740d33a3badd825c56dd3177715fcfdc6cdc319dbc0205c639dc27"
Nov 28 07:14:14 crc kubenswrapper[4889]: I1128 07:14:14.593973 4889 scope.go:117] "RemoveContainer" containerID="80c6cc6184e8e38640ca0ca9b5302f442f97049739e7de3303b5b38510b2b6a5"
Nov 28 07:14:58 crc kubenswrapper[4889]: I1128 07:14:58.783070 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:14:58 crc kubenswrapper[4889]: I1128 07:14:58.783647 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.161409 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"]
Nov 28 07:15:00 crc kubenswrapper[4889]: E1128 07:15:00.161699 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerName="extract-content"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.161733 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerName="extract-content"
Nov 28 07:15:00 crc kubenswrapper[4889]: E1128 07:15:00.161753 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerName="extract-utilities"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.161760 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerName="extract-utilities"
Nov 28 07:15:00 crc kubenswrapper[4889]: E1128 07:15:00.161780 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerName="registry-server"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.161786 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerName="registry-server"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.161950 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d97a0d9-0d4b-4d44-9ff4-15564b8fa04b" containerName="registry-server"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.162605 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.169109 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.169781 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.180298 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"]
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.338193 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88638856-c845-40d9-aa57-400310c33640-config-volume\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.338304 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc6fb\" (UniqueName: \"kubernetes.io/projected/88638856-c845-40d9-aa57-400310c33640-kube-api-access-mc6fb\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.338362 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88638856-c845-40d9-aa57-400310c33640-secret-volume\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.440373 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc6fb\" (UniqueName: \"kubernetes.io/projected/88638856-c845-40d9-aa57-400310c33640-kube-api-access-mc6fb\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.442066 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88638856-c845-40d9-aa57-400310c33640-secret-volume\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.442600 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88638856-c845-40d9-aa57-400310c33640-config-volume\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.444261 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88638856-c845-40d9-aa57-400310c33640-config-volume\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.447411 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88638856-c845-40d9-aa57-400310c33640-secret-volume\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.456528 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc6fb\" (UniqueName: \"kubernetes.io/projected/88638856-c845-40d9-aa57-400310c33640-kube-api-access-mc6fb\") pod \"collect-profiles-29405235-8fmmv\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.490491 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:00 crc kubenswrapper[4889]: I1128 07:15:00.889656 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"]
Nov 28 07:15:01 crc kubenswrapper[4889]: I1128 07:15:01.067256 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv" event={"ID":"88638856-c845-40d9-aa57-400310c33640","Type":"ContainerStarted","Data":"0b25f744bad36f4ac0a848fbe33fad47961b868da19ecef308b4606247c14b65"}
Nov 28 07:15:01 crc kubenswrapper[4889]: I1128 07:15:01.067311 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv" event={"ID":"88638856-c845-40d9-aa57-400310c33640","Type":"ContainerStarted","Data":"d70720f26cdcf39e9041dd350b624021b9c7df4a3b466002f82ac535175503ae"}
Nov 28 07:15:01 crc kubenswrapper[4889]: I1128 07:15:01.083737 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv" podStartSLOduration=1.083720448 podStartE2EDuration="1.083720448s" podCreationTimestamp="2025-11-28 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:01.081071483 +0000 UTC m=+1624.051305648" watchObservedRunningTime="2025-11-28 07:15:01.083720448 +0000 UTC m=+1624.053954603"
Nov 28 07:15:02 crc kubenswrapper[4889]: I1128 07:15:02.076427 4889 generic.go:334] "Generic (PLEG): container finished" podID="88638856-c845-40d9-aa57-400310c33640" containerID="0b25f744bad36f4ac0a848fbe33fad47961b868da19ecef308b4606247c14b65" exitCode=0
Nov 28 07:15:02 crc kubenswrapper[4889]: I1128 07:15:02.076542 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv" event={"ID":"88638856-c845-40d9-aa57-400310c33640","Type":"ContainerDied","Data":"0b25f744bad36f4ac0a848fbe33fad47961b868da19ecef308b4606247c14b65"}
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.441570 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.589258 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88638856-c845-40d9-aa57-400310c33640-config-volume\") pod \"88638856-c845-40d9-aa57-400310c33640\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") "
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.589339 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc6fb\" (UniqueName: \"kubernetes.io/projected/88638856-c845-40d9-aa57-400310c33640-kube-api-access-mc6fb\") pod \"88638856-c845-40d9-aa57-400310c33640\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") "
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.589449 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88638856-c845-40d9-aa57-400310c33640-secret-volume\") pod \"88638856-c845-40d9-aa57-400310c33640\" (UID: \"88638856-c845-40d9-aa57-400310c33640\") "
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.590241 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88638856-c845-40d9-aa57-400310c33640-config-volume" (OuterVolumeSpecName: "config-volume") pod "88638856-c845-40d9-aa57-400310c33640" (UID: "88638856-c845-40d9-aa57-400310c33640"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.594188 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88638856-c845-40d9-aa57-400310c33640-kube-api-access-mc6fb" (OuterVolumeSpecName: "kube-api-access-mc6fb") pod "88638856-c845-40d9-aa57-400310c33640" (UID: "88638856-c845-40d9-aa57-400310c33640"). InnerVolumeSpecName "kube-api-access-mc6fb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.594434 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88638856-c845-40d9-aa57-400310c33640-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "88638856-c845-40d9-aa57-400310c33640" (UID: "88638856-c845-40d9-aa57-400310c33640"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.691239 4889 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88638856-c845-40d9-aa57-400310c33640-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.691280 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc6fb\" (UniqueName: \"kubernetes.io/projected/88638856-c845-40d9-aa57-400310c33640-kube-api-access-mc6fb\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:03 crc kubenswrapper[4889]: I1128 07:15:03.691295 4889 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88638856-c845-40d9-aa57-400310c33640-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:04 crc kubenswrapper[4889]: I1128 07:15:04.097388 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv" event={"ID":"88638856-c845-40d9-aa57-400310c33640","Type":"ContainerDied","Data":"d70720f26cdcf39e9041dd350b624021b9c7df4a3b466002f82ac535175503ae"}
Nov 28 07:15:04 crc kubenswrapper[4889]: I1128 07:15:04.097436 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d70720f26cdcf39e9041dd350b624021b9c7df4a3b466002f82ac535175503ae"
Nov 28 07:15:04 crc kubenswrapper[4889]: I1128 07:15:04.097439 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-8fmmv"
Nov 28 07:15:14 crc kubenswrapper[4889]: I1128 07:15:14.789182 4889 scope.go:117] "RemoveContainer" containerID="def89232890ff2ea1170bf03b014fd49855e7baececf04474b47909d8032e453"
Nov 28 07:15:14 crc kubenswrapper[4889]: I1128 07:15:14.808884 4889 scope.go:117] "RemoveContainer" containerID="9fd6cb9711212f1b50db5fb86ef597f96c8e223cca5df078fa1c7ccf1975f3c1"
Nov 28 07:15:14 crc kubenswrapper[4889]: I1128 07:15:14.833861 4889 scope.go:117] "RemoveContainer" containerID="8920163e2dc589b5513e696a85abe83032ed5431a721238125a0aa7d7e0d0f9b"
Nov 28 07:15:14 crc kubenswrapper[4889]: I1128 07:15:14.864672 4889 scope.go:117] "RemoveContainer" containerID="cc02df9dd41cbbd4e51054169827a03deb88d838c9bdef4962c059f73afe61b1"
Nov 28 07:15:14 crc kubenswrapper[4889]: I1128 07:15:14.889862 4889 scope.go:117] "RemoveContainer" containerID="88a22234953fdf7b7113f5a16ffb14c7f8e9a5558572a79a29816025d55b2843"
Nov 28 07:15:14 crc kubenswrapper[4889]: I1128 07:15:14.914557 4889 scope.go:117] "RemoveContainer" containerID="75ed159e60c5103572c8fb3ecbae7b93d6b368514cd6572fd9c95be2410e5190"
Nov 28 07:15:14 crc kubenswrapper[4889]: I1128 07:15:14.950508 4889 scope.go:117] "RemoveContainer" containerID="d18408781f5d06767c9d4579d6c659e932641952a74fa7191a3d835fe6de724a"
Nov 28 07:15:15 crc kubenswrapper[4889]: I1128 07:15:15.021870 4889 scope.go:117] "RemoveContainer" containerID="4357abd1870bf9b98ddbe5b9e7cf569546cc20dde733492fd058d85e4252fbcd"
Nov 28 07:15:15 crc kubenswrapper[4889]: I1128 07:15:15.088580 4889 scope.go:117] "RemoveContainer" containerID="3681738ee6ffdc99631fc7592669b27cf1b5051d8fb5d42ca06423a423f3298e"
Nov 28 07:15:15 crc kubenswrapper[4889]: I1128 07:15:15.106770 4889 scope.go:117] "RemoveContainer" containerID="bc963ae674cb642cf73feedb96f166caf22e14565033105ea01efe00c81d6de0"
Nov 28 07:15:15 crc kubenswrapper[4889]: I1128 07:15:15.144023 4889 scope.go:117] "RemoveContainer" containerID="6dc7556254073930e346ad003426e246a1fe721ea68cbc74809582204ec3e3ad"
Nov 28 07:15:15 crc kubenswrapper[4889]: I1128 07:15:15.162202 4889 scope.go:117] "RemoveContainer" containerID="51667981de37a0e16560a734e0071aa71940bbf1daef40d427e6300ca8b6578b"
Nov 28 07:15:28 crc kubenswrapper[4889]: I1128 07:15:28.782935 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:15:28 crc kubenswrapper[4889]: I1128 07:15:28.784369 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:15:58 crc kubenswrapper[4889]: I1128 07:15:58.782548 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:15:58 crc kubenswrapper[4889]: I1128 07:15:58.783951 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:15:58 crc kubenswrapper[4889]: I1128 07:15:58.784052 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9"
Nov 28 07:15:58 crc kubenswrapper[4889]: I1128 07:15:58.784541 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"} pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 07:15:58 crc kubenswrapper[4889]: I1128 07:15:58.784683 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" containerID="cri-o://749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" gracePeriod=600
Nov 28 07:15:59 crc kubenswrapper[4889]: E1128 07:15:59.410854 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:15:59 crc kubenswrapper[4889]: I1128 07:15:59.570567 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" exitCode=0
Nov 28 07:15:59 crc kubenswrapper[4889]: I1128 07:15:59.570613 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerDied","Data":"749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"}
Nov 28 07:15:59 crc kubenswrapper[4889]: I1128 07:15:59.570648 4889 scope.go:117] "RemoveContainer" containerID="59b1be213e0c3af7ecbb85479735c5e364bee7085ba772a3db6c7ee269ef019c"
Nov 28 07:15:59 crc kubenswrapper[4889]: I1128 07:15:59.571121 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:15:59 crc kubenswrapper[4889]: E1128 07:15:59.571375 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:16:15 crc kubenswrapper[4889]: I1128 07:16:15.330484 4889 scope.go:117] "RemoveContainer" containerID="e5eff60d8d8d77100dc3390741635f318a7f480049c275a81964aa7dfa36c631"
Nov 28 07:16:15 crc kubenswrapper[4889]: I1128 07:16:15.332113 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:16:15 crc kubenswrapper[4889]: E1128 07:16:15.332369 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:16:15 crc kubenswrapper[4889]: I1128 07:16:15.383760 4889 scope.go:117] "RemoveContainer" containerID="d8b7dcde5ba3efb58f541995114142c3cd0d5e253a1a00051e1b96d3c29ecbaa"
Nov 28 07:16:15 crc kubenswrapper[4889]: I1128 07:16:15.430170 4889 scope.go:117] "RemoveContainer" containerID="44b7db3182088e18f98c4148c5c8466a90e1d81667a7e51d563183b230863828"
Nov 28 07:16:15 crc kubenswrapper[4889]: I1128 07:16:15.448676 4889 scope.go:117] "RemoveContainer" containerID="f74379e32f1e90e6851c1cc759b79adaa7b86d8d5bc59b8c75de94d7ca32b4cf"
Nov 28 07:16:15 crc kubenswrapper[4889]: I1128 07:16:15.465456 4889 scope.go:117] "RemoveContainer" containerID="626370a08a887e0cf879525f92156e1e49617b4c81dc57238c397d3e1a16d956"
Nov 28 07:16:26 crc kubenswrapper[4889]: I1128 07:16:26.331318 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:16:26 crc kubenswrapper[4889]: E1128 07:16:26.332011 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:16:37 crc kubenswrapper[4889]: I1128 07:16:37.335166 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:16:37 crc kubenswrapper[4889]: E1128 07:16:37.335981 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:16:48 crc kubenswrapper[4889]: I1128 07:16:48.331772 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:16:48 crc kubenswrapper[4889]: E1128 07:16:48.332478 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:17:01 crc kubenswrapper[4889]: I1128 07:17:01.332458 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:17:01 crc kubenswrapper[4889]: E1128 07:17:01.333890 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.050559 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tn7z2/must-gather-vpxpl"]
Nov 28 07:17:12 crc kubenswrapper[4889]: E1128 07:17:12.051444 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88638856-c845-40d9-aa57-400310c33640" containerName="collect-profiles"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.051459 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="88638856-c845-40d9-aa57-400310c33640" containerName="collect-profiles"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.051643 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="88638856-c845-40d9-aa57-400310c33640" containerName="collect-profiles"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.052558 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tn7z2/must-gather-vpxpl"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.054511 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tn7z2"/"openshift-service-ca.crt"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.054528 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tn7z2"/"default-dockercfg-dsm72"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.055799 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tn7z2"/"kube-root-ca.crt"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.076814 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tn7z2/must-gather-vpxpl"]
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.226596 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23a74547-00a2-4b35-81ae-a81db6d72f91-must-gather-output\") pod \"must-gather-vpxpl\" (UID: \"23a74547-00a2-4b35-81ae-a81db6d72f91\") " pod="openshift-must-gather-tn7z2/must-gather-vpxpl"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.226701 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7bnx\" (UniqueName: \"kubernetes.io/projected/23a74547-00a2-4b35-81ae-a81db6d72f91-kube-api-access-q7bnx\") pod \"must-gather-vpxpl\" (UID: \"23a74547-00a2-4b35-81ae-a81db6d72f91\") " pod="openshift-must-gather-tn7z2/must-gather-vpxpl"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.328308 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7bnx\" (UniqueName: \"kubernetes.io/projected/23a74547-00a2-4b35-81ae-a81db6d72f91-kube-api-access-q7bnx\") pod \"must-gather-vpxpl\" (UID: \"23a74547-00a2-4b35-81ae-a81db6d72f91\") " pod="openshift-must-gather-tn7z2/must-gather-vpxpl"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.328384 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23a74547-00a2-4b35-81ae-a81db6d72f91-must-gather-output\") pod \"must-gather-vpxpl\" (UID: \"23a74547-00a2-4b35-81ae-a81db6d72f91\") " pod="openshift-must-gather-tn7z2/must-gather-vpxpl"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.328829 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23a74547-00a2-4b35-81ae-a81db6d72f91-must-gather-output\") pod \"must-gather-vpxpl\" (UID: \"23a74547-00a2-4b35-81ae-a81db6d72f91\") " pod="openshift-must-gather-tn7z2/must-gather-vpxpl"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.345438 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7bnx\" (UniqueName: \"kubernetes.io/projected/23a74547-00a2-4b35-81ae-a81db6d72f91-kube-api-access-q7bnx\") pod \"must-gather-vpxpl\" (UID: \"23a74547-00a2-4b35-81ae-a81db6d72f91\") " pod="openshift-must-gather-tn7z2/must-gather-vpxpl"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.370527 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tn7z2/must-gather-vpxpl"
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.802437 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tn7z2/must-gather-vpxpl"]
Nov 28 07:17:12 crc kubenswrapper[4889]: I1128 07:17:12.813624 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 07:17:13 crc kubenswrapper[4889]: I1128 07:17:13.149545 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tn7z2/must-gather-vpxpl" event={"ID":"23a74547-00a2-4b35-81ae-a81db6d72f91","Type":"ContainerStarted","Data":"f7dff5e2970f031041931c2d5276cae64b3e90cf14062ad4a4b8958cd8547118"}
Nov 28 07:17:14 crc kubenswrapper[4889]: I1128 07:17:14.332147 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:17:14 crc kubenswrapper[4889]: E1128 07:17:14.332425 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:17:15 crc kubenswrapper[4889]: I1128 07:17:15.580952 4889 scope.go:117] "RemoveContainer" containerID="288b5735636195a57175f9729109000c6d28e61a65a01ad2ade3cb58e7243743"
Nov 28 07:17:19 crc kubenswrapper[4889]: I1128 07:17:19.477895 4889 scope.go:117] "RemoveContainer" containerID="29d04d773589b050b9a77e90cdf11d2996f36460fa7d4f5ca93bba075ac8e4fd"
Nov 28 07:17:19 crc kubenswrapper[4889]: I1128 07:17:19.535387 4889 scope.go:117] "RemoveContainer" containerID="46e2b61c8e6ecfe9ae9928060f1a929cb5525c2e321d7fb2129b0bb6ab9cc8a8"
Nov 28 07:17:20 crc kubenswrapper[4889]: I1128 07:17:20.204893 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tn7z2/must-gather-vpxpl" event={"ID":"23a74547-00a2-4b35-81ae-a81db6d72f91","Type":"ContainerStarted","Data":"da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851"}
Nov 28 07:17:20 crc kubenswrapper[4889]: I1128 07:17:20.205255 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tn7z2/must-gather-vpxpl" event={"ID":"23a74547-00a2-4b35-81ae-a81db6d72f91","Type":"ContainerStarted","Data":"9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8"}
Nov 28 07:17:20 crc kubenswrapper[4889]: I1128 07:17:20.234510 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tn7z2/must-gather-vpxpl" podStartSLOduration=1.461918329 podStartE2EDuration="8.234453655s" podCreationTimestamp="2025-11-28 07:17:12 +0000 UTC" firstStartedPulling="2025-11-28 07:17:12.813427397 +0000 UTC m=+1755.783661552" lastFinishedPulling="2025-11-28 07:17:19.585962713 +0000 UTC m=+1762.556196878" observedRunningTime="2025-11-28 07:17:20.223877226 +0000 UTC m=+1763.194111451" watchObservedRunningTime="2025-11-28 07:17:20.234453655 +0000 UTC m=+1763.204687850"
Nov 28 07:17:28 crc kubenswrapper[4889]: I1128 07:17:28.331560 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:17:28 crc kubenswrapper[4889]: E1128 07:17:28.332568 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:17:42 crc kubenswrapper[4889]: I1128 07:17:42.332124 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:17:42 crc kubenswrapper[4889]: E1128 07:17:42.332929 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:17:53 crc kubenswrapper[4889]: I1128 07:17:53.331381 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:17:53 crc kubenswrapper[4889]: E1128 07:17:53.333416 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:18:07 crc kubenswrapper[4889]: I1128 07:18:07.336769 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:18:07 crc kubenswrapper[4889]: E1128 07:18:07.337449 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.230855 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4_d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db/util/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.405497 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4_d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db/util/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.416344 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4_d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db/pull/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.426198 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4_d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db/pull/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.615135 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4_d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db/util/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.615444 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4_d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db/pull/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.655289 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbe7srm4_d8cecfd2-e3f3-46e2-aa8d-a4bab90a07db/extract/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.818269 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-rwzxg_0ee115df-19fd-4ca6-a087-9f4a56a86378/kube-rbac-proxy/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.892508 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-kvt48_363ed2cd-915f-4260-8eb7-950ff710b500/kube-rbac-proxy/0.log"
Nov 28 07:18:10 crc kubenswrapper[4889]: I1128 07:18:10.921113 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-rwzxg_0ee115df-19fd-4ca6-a087-9f4a56a86378/manager/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.030829 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-kvt48_363ed2cd-915f-4260-8eb7-950ff710b500/manager/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.089990 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-mxn8f_b9d9668a-02e9-4d9f-856e-be23f0484ccf/manager/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.109038 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-mxn8f_b9d9668a-02e9-4d9f-856e-be23f0484ccf/kube-rbac-proxy/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.254403 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-qcklj_dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a/kube-rbac-proxy/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.381749 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-qcklj_dfbfa9a4-20f5-4c28-a4b6-4a12dd6b4d5a/manager/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.408888 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-dr6z8_d281dca0-e9e1-4e2d-befc-0508ae9421b9/kube-rbac-proxy/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.444162 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-dr6z8_d281dca0-e9e1-4e2d-befc-0508ae9421b9/manager/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.537973 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-lbvbd_178814bc-902e-43d9-a606-c3640477a94d/kube-rbac-proxy/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.572031 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-lbvbd_178814bc-902e-43d9-a606-c3640477a94d/manager/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.718064 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-lwdbd_559d7ec2-8cd6-4c5c-a844-c7f3953ec021/kube-rbac-proxy/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.838552 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-k5wp6_dcd06fe4-e876-4947-b5c6-812381c42b71/kube-rbac-proxy/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.870501 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-lwdbd_559d7ec2-8cd6-4c5c-a844-c7f3953ec021/manager/0.log"
Nov 28 07:18:11 crc kubenswrapper[4889]: I1128 07:18:11.906248 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-k5wp6_dcd06fe4-e876-4947-b5c6-812381c42b71/manager/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.026974 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-mlwrn_1fec2494-e72e-4019-a869-b3080018f75d/kube-rbac-proxy/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.144271 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-mlwrn_1fec2494-e72e-4019-a869-b3080018f75d/manager/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.196410 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-svz4w_8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d/kube-rbac-proxy/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.232631 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-svz4w_8b08c9da-c161-4ca7-a50a-f70b7ee7ce7d/manager/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.338908 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-g6lns_670339cb-0ec6-48bc-b892-c14ad66849c0/kube-rbac-proxy/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.389532 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-g6lns_670339cb-0ec6-48bc-b892-c14ad66849c0/manager/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.470163 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-hds8d_50833719-605c-4e59-9535-7377eeb99994/kube-rbac-proxy/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.576494 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-hds8d_50833719-605c-4e59-9535-7377eeb99994/manager/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.616813 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-hd79s_1a7dd634-e6b8-435c-963b-e482cc1d0cac/kube-rbac-proxy/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.769614 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-hd79s_1a7dd634-e6b8-435c-963b-e482cc1d0cac/manager/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.816469 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-hmpqd_e81f775c-9ce2-415f-8bd3-ed49458ae893/kube-rbac-proxy/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.861692 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-hmpqd_e81f775c-9ce2-415f-8bd3-ed49458ae893/manager/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.973446 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc_a9f38b46-2bae-4e2d-8b02-c314b9e8f77a/manager/0.log"
Nov 28 07:18:12 crc kubenswrapper[4889]: I1128 07:18:12.987100 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5d9f9695dbntxsc_a9f38b46-2bae-4e2d-8b02-c314b9e8f77a/kube-rbac-proxy/0.log"
Nov 28 07:18:13 crc kubenswrapper[4889]: I1128 07:18:13.415024 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jvv8f_70cdc5d2-6373-4fe4-9b35-fcabe3f1853c/registry-server/0.log"
Nov 28 07:18:13 crc kubenswrapper[4889]: I1128 07:18:13.470521 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-67d8f6cc56-8bp8m_2effd529-dd6f-4763-9f9e-585e03124be7/operator/0.log"
Nov 28 07:18:13 crc kubenswrapper[4889]: I1128 07:18:13.596300 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-pzwp7_22d7b246-073e-4b87-81f8-04cb344e317c/kube-rbac-proxy/0.log"
Nov 28 07:18:13 crc kubenswrapper[4889]: I1128 07:18:13.759955 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-pzwp7_22d7b246-073e-4b87-81f8-04cb344e317c/manager/0.log"
Nov 28 07:18:13 crc kubenswrapper[4889]: I1128 07:18:13.825432 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-dp5mk_af524ba5-acaf-4f33-bb04-6c2818b1cdf5/kube-rbac-proxy/0.log"
Nov 28 07:18:13 crc kubenswrapper[4889]: I1128 07:18:13.918537 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-dp5mk_af524ba5-acaf-4f33-bb04-6c2818b1cdf5/manager/0.log"
Nov 28 07:18:13 crc kubenswrapper[4889]: I1128 07:18:13.982311 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6dwz7_37909eac-261b-42f4-b85e-14fd8f00c42b/operator/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.069676 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-66f75ddbcc-g24v8_efc28083-2792-41ee-a835-5953afb3070d/manager/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.122701 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-j9vh5_47734956-e3b4-4ca1-8f4b-490b2f861bf0/kube-rbac-proxy/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.199225 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-j9vh5_47734956-e3b4-4ca1-8f4b-490b2f861bf0/manager/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.270854 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-kp8mv_b531db0a-6f24-4a61-811a-d75de0f59e94/kube-rbac-proxy/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.287348 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-kp8mv_b531db0a-6f24-4a61-811a-d75de0f59e94/manager/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.399289 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-tcbth_f02b81e8-ad8a-445e-8ebd-156f05fdd9e7/kube-rbac-proxy/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.406310 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-tcbth_f02b81e8-ad8a-445e-8ebd-156f05fdd9e7/manager/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.471906 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-9swrr_729b51c8-1b36-4716-8c7a-ae23ed249f03/kube-rbac-proxy/0.log"
Nov 28 07:18:14 crc kubenswrapper[4889]: I1128 07:18:14.535931 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-9swrr_729b51c8-1b36-4716-8c7a-ae23ed249f03/manager/0.log"
Nov 28 07:18:19 crc kubenswrapper[4889]: I1128 07:18:19.668248 4889 scope.go:117] "RemoveContainer" containerID="1557253848b8173208cc1bcb66293e44e9523ff9fbd1b1b06ff1d6db2d81cb11"
Nov 28 07:18:19 crc kubenswrapper[4889]: I1128 07:18:19.695268 4889 scope.go:117] "RemoveContainer" containerID="79ae825f91682d3ef9316a02a63bea53ce47014451ad8747ccf4c29d0f23bd3f"
Nov 28 07:18:19 crc kubenswrapper[4889]: I1128 07:18:19.717922 4889 scope.go:117] "RemoveContainer" containerID="c8514c1d93d6758c1c17ee226579c728c34fd0e086a2444dc40c4d5d2304872e"
Nov 28 07:18:19 crc kubenswrapper[4889]: I1128 07:18:19.747859 4889 scope.go:117] "RemoveContainer" containerID="ccd9a3451af7543863610a3361f75c597a7f1f94847eeb509c2590583ff9a2bb"
Nov 28 07:18:19 crc kubenswrapper[4889]: I1128 07:18:19.769828 4889 scope.go:117] "RemoveContainer" containerID="4df87c7cf9cdf092f6514707cb04ba80cb0fb8d948bf9c9993b17b16cfe34085"
Nov 28 07:18:19 crc kubenswrapper[4889]: I1128 07:18:19.790467 4889 scope.go:117] "RemoveContainer" containerID="258eebe34d9ee1f7f4a1d22b1cba432730424374a547da27abf949d733c647d0"
Nov 28 07:18:19 crc kubenswrapper[4889]: I1128 07:18:19.812138 4889 scope.go:117] "RemoveContainer" containerID="b03a3bf0facdd489729d5f0987b893b7764a5dde678763f6b8bcdbcc09c73388"
Nov 28 07:18:21 crc kubenswrapper[4889]: I1128 07:18:21.332450 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:18:21 crc kubenswrapper[4889]: E1128 07:18:21.333085 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:18:31 crc kubenswrapper[4889]: I1128 07:18:31.520738 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-r2h7q_9505fe40-d6f4-40f5-b555-486eddeeefd5/control-plane-machine-set-operator/0.log"
Nov 28 07:18:31 crc kubenswrapper[4889]: I1128 07:18:31.677822 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hn9w9_a917d9bc-242b-4537-b454-edab3a6da7d6/kube-rbac-proxy/0.log"
Nov 28 07:18:31 crc kubenswrapper[4889]: I1128 07:18:31.696873 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hn9w9_a917d9bc-242b-4537-b454-edab3a6da7d6/machine-api-operator/0.log"
Nov 28 07:18:33 crc kubenswrapper[4889]: I1128 07:18:33.331746 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:18:33 crc kubenswrapper[4889]: E1128 07:18:33.332330 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:18:43 crc kubenswrapper[4889]: I1128 07:18:43.033120 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-fzbjf_0220baa2-0242-482e-a078-e466f273d0f0/cert-manager-controller/0.log"
Nov 28 07:18:43 crc kubenswrapper[4889]: I1128 07:18:43.174051 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-9cclj_bd5092c5-0e74-4f68-a2cd-033dc52f1e01/cert-manager-cainjector/0.log"
Nov 28 07:18:43 crc kubenswrapper[4889]: I1128 07:18:43.232675 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-6pv9d_9ff59aa7-f908-4b9d-bbfd-7e8bedd07ee5/cert-manager-webhook/0.log"
Nov 28 07:18:48 crc kubenswrapper[4889]: I1128 07:18:48.331735 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2"
Nov 28 07:18:48 crc kubenswrapper[4889]: E1128 07:18:48.332336 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2"
Nov 28 07:18:54 crc kubenswrapper[4889]: I1128 07:18:54.978182 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-2jkx8_193c905f-411f-4fa6-bbfd-83039c4d3d8b/nmstate-console-plugin/0.log"
Nov 28 07:18:55 crc kubenswrapper[4889]: I1128 07:18:55.151718 4889 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-dlpnc_612daf5f-d1e1-4aa9-b972-9d8ab3ea3211/kube-rbac-proxy/0.log" Nov 28 07:18:55 crc kubenswrapper[4889]: I1128 07:18:55.175164 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8pw8z_cf16260c-c349-4586-a9db-278bbf0cbb99/nmstate-handler/0.log" Nov 28 07:18:55 crc kubenswrapper[4889]: I1128 07:18:55.245043 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-dlpnc_612daf5f-d1e1-4aa9-b972-9d8ab3ea3211/nmstate-metrics/0.log" Nov 28 07:18:55 crc kubenswrapper[4889]: I1128 07:18:55.364456 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-kbqpw_d804fabb-6387-4eef-a102-c35754398811/nmstate-operator/0.log" Nov 28 07:18:55 crc kubenswrapper[4889]: I1128 07:18:55.447132 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-n2k6g_6b4620ef-3cb6-45a5-8787-58e934465bac/nmstate-webhook/0.log" Nov 28 07:19:03 crc kubenswrapper[4889]: I1128 07:19:03.332379 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:19:03 crc kubenswrapper[4889]: E1128 07:19:03.333098 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:19:08 crc kubenswrapper[4889]: I1128 07:19:08.784212 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-h5xvn_63880bcb-6dcc-4936-a476-c3622733a4cf/kube-rbac-proxy/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.052557 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-frr-files/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.129782 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-reloader/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.129881 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-frr-files/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.139848 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-h5xvn_63880bcb-6dcc-4936-a476-c3622733a4cf/controller/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.266731 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-metrics/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.286186 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-reloader/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.450231 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-frr-files/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.488560 4889 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-metrics/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.489019 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-metrics/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.505658 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-reloader/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.691948 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-reloader/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.698968 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-frr-files/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.720588 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/cp-metrics/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.755396 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/controller/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.856275 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/frr-metrics/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.928306 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/kube-rbac-proxy/0.log" Nov 28 07:19:09 crc kubenswrapper[4889]: I1128 07:19:09.947774 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/kube-rbac-proxy-frr/0.log" Nov 28 07:19:10 crc kubenswrapper[4889]: I1128 07:19:10.084216 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/reloader/0.log" Nov 28 07:19:10 crc kubenswrapper[4889]: I1128 07:19:10.150009 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-9nk2m_6376e2a1-c497-4e4f-a962-4b7af74a0cbb/frr-k8s-webhook-server/0.log" Nov 28 07:19:10 crc kubenswrapper[4889]: I1128 07:19:10.381836 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69d9449997-wlhbq_f6ef069d-811d-4f18-a4e9-d7fa63b0096f/manager/0.log" Nov 28 07:19:10 crc kubenswrapper[4889]: I1128 07:19:10.580431 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f6b649f7b-vt4wl_e8754ebc-1d87-4dfb-ac08-9c010fbe8109/webhook-server/0.log" Nov 28 07:19:10 crc kubenswrapper[4889]: I1128 07:19:10.621802 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4sdtt_f466b540-ed9d-495d-8cf2-e6879ab71d05/kube-rbac-proxy/0.log" Nov 28 07:19:10 crc kubenswrapper[4889]: I1128 07:19:10.753964 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-267hv_6cbe65b7-1028-430c-a03b-48ecae8cd4e6/frr/0.log" Nov 28 07:19:11 crc kubenswrapper[4889]: I1128 07:19:11.097370 4889 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-4sdtt_f466b540-ed9d-495d-8cf2-e6879ab71d05/speaker/0.log" Nov 28 07:19:14 crc kubenswrapper[4889]: I1128 07:19:14.332579 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:19:14 crc kubenswrapper[4889]: E1128 07:19:14.333290 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:19:23 crc kubenswrapper[4889]: I1128 07:19:23.310272 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7_950186ee-ac42-4e8b-b946-437c6c9d3c0b/util/0.log" Nov 28 07:19:23 crc kubenswrapper[4889]: I1128 07:19:23.448583 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7_950186ee-ac42-4e8b-b946-437c6c9d3c0b/util/0.log" Nov 28 07:19:23 crc kubenswrapper[4889]: I1128 07:19:23.539577 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7_950186ee-ac42-4e8b-b946-437c6c9d3c0b/pull/0.log" Nov 28 07:19:23 crc kubenswrapper[4889]: I1128 07:19:23.543488 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7_950186ee-ac42-4e8b-b946-437c6c9d3c0b/pull/0.log" Nov 28 07:19:23 crc kubenswrapper[4889]: I1128 07:19:23.709735 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7_950186ee-ac42-4e8b-b946-437c6c9d3c0b/pull/0.log" Nov 28 07:19:23 crc kubenswrapper[4889]: I1128 07:19:23.710689 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7_950186ee-ac42-4e8b-b946-437c6c9d3c0b/extract/0.log" Nov 28 07:19:23 crc kubenswrapper[4889]: I1128 07:19:23.726286 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar4hl7_950186ee-ac42-4e8b-b946-437c6c9d3c0b/util/0.log" Nov 28 07:19:23 crc kubenswrapper[4889]: I1128 07:19:23.847922 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb_f34b29e6-fe3f-4bf4-9e80-3bd54e012e48/util/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.006004 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb_f34b29e6-fe3f-4bf4-9e80-3bd54e012e48/util/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.041120 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb_f34b29e6-fe3f-4bf4-9e80-3bd54e012e48/pull/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.056057 4889 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb_f34b29e6-fe3f-4bf4-9e80-3bd54e012e48/pull/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.191891 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb_f34b29e6-fe3f-4bf4-9e80-3bd54e012e48/pull/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.200104 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb_f34b29e6-fe3f-4bf4-9e80-3bd54e012e48/extract/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.200331 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f64kzb_f34b29e6-fe3f-4bf4-9e80-3bd54e012e48/util/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.349229 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5_5150180d-3afe-4c23-bfaa-8695d64fc2f9/util/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.477854 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5_5150180d-3afe-4c23-bfaa-8695d64fc2f9/util/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.506755 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5_5150180d-3afe-4c23-bfaa-8695d64fc2f9/pull/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.533460 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5_5150180d-3afe-4c23-bfaa-8695d64fc2f9/pull/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.703960 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5_5150180d-3afe-4c23-bfaa-8695d64fc2f9/extract/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.719063 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5_5150180d-3afe-4c23-bfaa-8695d64fc2f9/util/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.763687 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qkll5_5150180d-3afe-4c23-bfaa-8695d64fc2f9/pull/0.log" Nov 28 07:19:24 crc kubenswrapper[4889]: I1128 07:19:24.903894 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5tscg_e790ac24-fca9-4d15-942d-2469cdf17620/extract-utilities/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.163800 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5tscg_e790ac24-fca9-4d15-942d-2469cdf17620/extract-utilities/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.175478 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5tscg_e790ac24-fca9-4d15-942d-2469cdf17620/extract-content/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.181580 
4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5tscg_e790ac24-fca9-4d15-942d-2469cdf17620/extract-content/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.309363 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5tscg_e790ac24-fca9-4d15-942d-2469cdf17620/extract-utilities/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.317429 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5tscg_e790ac24-fca9-4d15-942d-2469cdf17620/extract-content/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.531884 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zvwjp_94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7/extract-utilities/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.708265 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zvwjp_94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7/extract-content/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.709526 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5tscg_e790ac24-fca9-4d15-942d-2469cdf17620/registry-server/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.738854 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zvwjp_94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7/extract-content/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.756673 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zvwjp_94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7/extract-utilities/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.918825 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zvwjp_94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7/extract-utilities/0.log" Nov 28 07:19:25 crc kubenswrapper[4889]: I1128 07:19:25.962673 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zvwjp_94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7/extract-content/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.128103 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ktxlv_93f9f385-e809-4bca-b770-f6967eaa5578/marketplace-operator/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.231485 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zvwjp_94cbc9b6-1be5-4d8f-8fe2-fe4c191b45d7/registry-server/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.254981 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w7bqh_e35511fa-effe-470c-bb25-f144f1e21248/extract-utilities/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.405999 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w7bqh_e35511fa-effe-470c-bb25-f144f1e21248/extract-content/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.416505 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w7bqh_e35511fa-effe-470c-bb25-f144f1e21248/extract-content/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.437555 4889 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w7bqh_e35511fa-effe-470c-bb25-f144f1e21248/extract-utilities/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.571751 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w7bqh_e35511fa-effe-470c-bb25-f144f1e21248/extract-utilities/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.584480 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w7bqh_e35511fa-effe-470c-bb25-f144f1e21248/extract-content/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.670584 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w7bqh_e35511fa-effe-470c-bb25-f144f1e21248/registry-server/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.727297 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vgl8h_7f2d783e-1610-4d7a-b93b-8c840dba16b6/extract-utilities/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.909913 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vgl8h_7f2d783e-1610-4d7a-b93b-8c840dba16b6/extract-utilities/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.914319 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vgl8h_7f2d783e-1610-4d7a-b93b-8c840dba16b6/extract-content/0.log" Nov 28 07:19:26 crc kubenswrapper[4889]: I1128 07:19:26.945554 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vgl8h_7f2d783e-1610-4d7a-b93b-8c840dba16b6/extract-content/0.log" Nov 28 07:19:27 crc kubenswrapper[4889]: I1128 07:19:27.104478 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vgl8h_7f2d783e-1610-4d7a-b93b-8c840dba16b6/extract-utilities/0.log" Nov 28 07:19:27 crc kubenswrapper[4889]: I1128 07:19:27.108764 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vgl8h_7f2d783e-1610-4d7a-b93b-8c840dba16b6/extract-content/0.log" Nov 28 07:19:27 crc kubenswrapper[4889]: I1128 07:19:27.406884 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vgl8h_7f2d783e-1610-4d7a-b93b-8c840dba16b6/registry-server/0.log" Nov 28 07:19:28 crc kubenswrapper[4889]: I1128 07:19:28.332201 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:19:28 crc kubenswrapper[4889]: E1128 07:19:28.332418 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:19:41 crc kubenswrapper[4889]: I1128 07:19:41.331865 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:19:41 crc kubenswrapper[4889]: E1128 07:19:41.332530 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:19:54 crc kubenswrapper[4889]: I1128 07:19:54.331345 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:19:54 crc kubenswrapper[4889]: E1128 07:19:54.332059 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:20:07 crc kubenswrapper[4889]: I1128 07:20:07.337815 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:20:07 crc kubenswrapper[4889]: E1128 07:20:07.339118 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:20:18 crc kubenswrapper[4889]: I1128 07:20:18.332941 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:20:18 crc kubenswrapper[4889]: E1128 07:20:18.334056 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:20:29 crc kubenswrapper[4889]: I1128 07:20:29.612525 4889 generic.go:334] "Generic (PLEG): container finished" podID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerID="9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8" exitCode=0 Nov 28 07:20:29 crc kubenswrapper[4889]: I1128 07:20:29.612669 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tn7z2/must-gather-vpxpl" event={"ID":"23a74547-00a2-4b35-81ae-a81db6d72f91","Type":"ContainerDied","Data":"9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8"} Nov 28 07:20:29 crc kubenswrapper[4889]: I1128 07:20:29.614329 4889 scope.go:117] "RemoveContainer" containerID="9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8" Nov 28 07:20:30 crc kubenswrapper[4889]: I1128 07:20:30.489663 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tn7z2_must-gather-vpxpl_23a74547-00a2-4b35-81ae-a81db6d72f91/gather/0.log" Nov 28 07:20:32 crc kubenswrapper[4889]: I1128 07:20:32.331780 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:20:32 crc kubenswrapper[4889]: E1128 07:20:32.332620 4889 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.120987 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tn7z2/must-gather-vpxpl"] Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.121574 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tn7z2/must-gather-vpxpl" podUID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerName="copy" containerID="cri-o://da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851" gracePeriod=2 Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.126493 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tn7z2/must-gather-vpxpl"] Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.472077 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tn7z2_must-gather-vpxpl_23a74547-00a2-4b35-81ae-a81db6d72f91/copy/0.log" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.472755 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tn7z2/must-gather-vpxpl" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.606860 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23a74547-00a2-4b35-81ae-a81db6d72f91-must-gather-output\") pod \"23a74547-00a2-4b35-81ae-a81db6d72f91\" (UID: \"23a74547-00a2-4b35-81ae-a81db6d72f91\") " Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.606950 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7bnx\" (UniqueName: \"kubernetes.io/projected/23a74547-00a2-4b35-81ae-a81db6d72f91-kube-api-access-q7bnx\") pod \"23a74547-00a2-4b35-81ae-a81db6d72f91\" (UID: \"23a74547-00a2-4b35-81ae-a81db6d72f91\") " Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.619106 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a74547-00a2-4b35-81ae-a81db6d72f91-kube-api-access-q7bnx" (OuterVolumeSpecName: "kube-api-access-q7bnx") pod "23a74547-00a2-4b35-81ae-a81db6d72f91" (UID: "23a74547-00a2-4b35-81ae-a81db6d72f91"). InnerVolumeSpecName "kube-api-access-q7bnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.681319 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tn7z2_must-gather-vpxpl_23a74547-00a2-4b35-81ae-a81db6d72f91/copy/0.log" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.681612 4889 generic.go:334] "Generic (PLEG): container finished" podID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerID="da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851" exitCode=143 Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.681661 4889 scope.go:117] "RemoveContainer" containerID="da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.681728 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tn7z2/must-gather-vpxpl" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.700043 4889 scope.go:117] "RemoveContainer" containerID="9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.707491 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23a74547-00a2-4b35-81ae-a81db6d72f91-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "23a74547-00a2-4b35-81ae-a81db6d72f91" (UID: "23a74547-00a2-4b35-81ae-a81db6d72f91"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.708673 4889 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23a74547-00a2-4b35-81ae-a81db6d72f91-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.708699 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7bnx\" (UniqueName: \"kubernetes.io/projected/23a74547-00a2-4b35-81ae-a81db6d72f91-kube-api-access-q7bnx\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.742717 4889 scope.go:117] "RemoveContainer" containerID="da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851" Nov 28 07:20:37 crc kubenswrapper[4889]: E1128 07:20:37.743270 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851\": container with ID starting with da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851 not found: ID does not exist" containerID="da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.743328 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851"} err="failed to get container status \"da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851\": rpc error: code = NotFound desc = could not find container \"da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851\": container with ID starting with da52a88bde4460902b9f4520fe8b0b6b5333dd6ea827bfb9edcc49c855d4a851 not found: ID does not exist" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.743356 4889 scope.go:117] "RemoveContainer" containerID="9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8" Nov 28 07:20:37 crc kubenswrapper[4889]: E1128 07:20:37.743812 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8\": container with ID starting with 9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8 not found: ID does not exist" containerID="9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8" Nov 28 07:20:37 crc kubenswrapper[4889]: I1128 07:20:37.743843 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8"} err="failed to get container status \"9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8\": rpc error: code = NotFound desc = could not find container 
\"9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8\": container with ID starting with 9ba1758ecbe4ffa68f3c13b735381a4e0a418641c00cb0746341e95942e5d6f8 not found: ID does not exist" Nov 28 07:20:39 crc kubenswrapper[4889]: I1128 07:20:39.341657 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a74547-00a2-4b35-81ae-a81db6d72f91" path="/var/lib/kubelet/pods/23a74547-00a2-4b35-81ae-a81db6d72f91/volumes" Nov 28 07:20:47 crc kubenswrapper[4889]: I1128 07:20:47.339465 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:20:47 crc kubenswrapper[4889]: E1128 07:20:47.340796 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwbr9_openshift-machine-config-operator(6a6707da-48a9-4e38-a1b2-df82148f0cd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.372472 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n5bj2"] Nov 28 07:20:55 crc kubenswrapper[4889]: E1128 07:20:55.373763 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerName="copy" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.373782 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerName="copy" Nov 28 07:20:55 crc kubenswrapper[4889]: E1128 07:20:55.373818 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerName="gather" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.373826 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerName="gather" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.373966 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerName="gather" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.373983 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a74547-00a2-4b35-81ae-a81db6d72f91" containerName="copy" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.375211 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.391670 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5bj2"] Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.508785 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58s7\" (UniqueName: \"kubernetes.io/projected/d96a1218-c158-4f37-8a5d-7b725056eeb6-kube-api-access-f58s7\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.508903 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-catalog-content\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.508968 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-utilities\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.610388 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-catalog-content\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.610678 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-utilities\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.610912 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f58s7\" (UniqueName: \"kubernetes.io/projected/d96a1218-c158-4f37-8a5d-7b725056eeb6-kube-api-access-f58s7\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.610943 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-catalog-content\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.611125 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-utilities\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.628413 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f58s7\" (UniqueName: \"kubernetes.io/projected/d96a1218-c158-4f37-8a5d-7b725056eeb6-kube-api-access-f58s7\") pod \"redhat-operators-n5bj2\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.696101 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:20:55 crc kubenswrapper[4889]: I1128 07:20:55.939787 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5bj2"] Nov 28 07:20:56 crc kubenswrapper[4889]: I1128 07:20:56.845006 4889 generic.go:334] "Generic (PLEG): container finished" podID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerID="9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e" exitCode=0 Nov 28 07:20:56 crc kubenswrapper[4889]: I1128 07:20:56.845092 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5bj2" event={"ID":"d96a1218-c158-4f37-8a5d-7b725056eeb6","Type":"ContainerDied","Data":"9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e"} Nov 28 07:20:56 crc kubenswrapper[4889]: I1128 07:20:56.845459 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5bj2" event={"ID":"d96a1218-c158-4f37-8a5d-7b725056eeb6","Type":"ContainerStarted","Data":"b36bb3fb397344a97b9b7c8df0c871e5b7f001fef4d3702285ea3f363551d653"} Nov 28 07:20:57 crc kubenswrapper[4889]: I1128 07:20:57.854964 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5bj2" event={"ID":"d96a1218-c158-4f37-8a5d-7b725056eeb6","Type":"ContainerStarted","Data":"e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc"} Nov 28 07:20:58 crc kubenswrapper[4889]: I1128 07:20:58.864690 4889 generic.go:334] "Generic (PLEG): container finished" podID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerID="e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc" exitCode=0 Nov 28 07:20:58 crc kubenswrapper[4889]: I1128 07:20:58.864765 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5bj2" event={"ID":"d96a1218-c158-4f37-8a5d-7b725056eeb6","Type":"ContainerDied","Data":"e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc"} Nov 28 07:20:59 crc kubenswrapper[4889]: I1128 07:20:59.332173 4889 scope.go:117] "RemoveContainer" containerID="749c041ab466359508795528f167740f895af96ba71707a65bbef20fad514bd2" Nov 28 07:20:59 crc kubenswrapper[4889]: I1128 07:20:59.872503 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" event={"ID":"6a6707da-48a9-4e38-a1b2-df82148f0cd2","Type":"ContainerStarted","Data":"0d2720b8b8fb070afe5087f2eeeb4593bf0fd686256ac3c2add5f9d482d880d4"} Nov 28 07:20:59 crc kubenswrapper[4889]: I1128 07:20:59.875680 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5bj2" event={"ID":"d96a1218-c158-4f37-8a5d-7b725056eeb6","Type":"ContainerStarted","Data":"cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206"} Nov 28 07:21:05 crc kubenswrapper[4889]: I1128 07:21:05.696994 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:21:05 crc kubenswrapper[4889]: I1128 07:21:05.698764 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:21:05 crc kubenswrapper[4889]: I1128 07:21:05.739647 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:21:05 crc kubenswrapper[4889]: I1128 07:21:05.764153 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n5bj2" podStartSLOduration=8.266998186 podStartE2EDuration="10.764134416s" podCreationTimestamp="2025-11-28 07:20:55 +0000 UTC" firstStartedPulling="2025-11-28 07:20:56.847022259 +0000 UTC m=+1979.817256454" lastFinishedPulling="2025-11-28 07:20:59.344158529 +0000 UTC m=+1982.314392684" observedRunningTime="2025-11-28 07:20:59.948964919 +0000 UTC m=+1982.919199064" watchObservedRunningTime="2025-11-28 07:21:05.764134416 +0000 UTC m=+1988.734368581" Nov 28 07:21:05 crc kubenswrapper[4889]: I1128 07:21:05.978622 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:21:06 crc kubenswrapper[4889]: I1128 07:21:06.026300 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5bj2"] Nov 28 07:21:07 crc kubenswrapper[4889]: I1128 07:21:07.940615 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n5bj2" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerName="registry-server" containerID="cri-o://cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206" gracePeriod=2 Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.293156 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.410594 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-catalog-content\") pod \"d96a1218-c158-4f37-8a5d-7b725056eeb6\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.410776 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-utilities\") pod \"d96a1218-c158-4f37-8a5d-7b725056eeb6\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.410808 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f58s7\" (UniqueName: \"kubernetes.io/projected/d96a1218-c158-4f37-8a5d-7b725056eeb6-kube-api-access-f58s7\") pod \"d96a1218-c158-4f37-8a5d-7b725056eeb6\" (UID: \"d96a1218-c158-4f37-8a5d-7b725056eeb6\") " Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.416539 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-utilities" (OuterVolumeSpecName: "utilities") pod "d96a1218-c158-4f37-8a5d-7b725056eeb6" (UID: "d96a1218-c158-4f37-8a5d-7b725056eeb6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.419562 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96a1218-c158-4f37-8a5d-7b725056eeb6-kube-api-access-f58s7" (OuterVolumeSpecName: "kube-api-access-f58s7") pod "d96a1218-c158-4f37-8a5d-7b725056eeb6" (UID: "d96a1218-c158-4f37-8a5d-7b725056eeb6"). InnerVolumeSpecName "kube-api-access-f58s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.512918 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.512973 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f58s7\" (UniqueName: \"kubernetes.io/projected/d96a1218-c158-4f37-8a5d-7b725056eeb6-kube-api-access-f58s7\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.953082 4889 generic.go:334] "Generic (PLEG): container finished" podID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerID="cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206" exitCode=0 Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.953139 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5bj2" event={"ID":"d96a1218-c158-4f37-8a5d-7b725056eeb6","Type":"ContainerDied","Data":"cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206"} Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.953175 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5bj2" event={"ID":"d96a1218-c158-4f37-8a5d-7b725056eeb6","Type":"ContainerDied","Data":"b36bb3fb397344a97b9b7c8df0c871e5b7f001fef4d3702285ea3f363551d653"} Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.953185 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5bj2" Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.953195 4889 scope.go:117] "RemoveContainer" containerID="cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206" Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.975081 4889 scope.go:117] "RemoveContainer" containerID="e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc" Nov 28 07:21:08 crc kubenswrapper[4889]: I1128 07:21:08.992314 4889 scope.go:117] "RemoveContainer" containerID="9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.015461 4889 scope.go:117] "RemoveContainer" containerID="cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206" Nov 28 07:21:09 crc kubenswrapper[4889]: E1128 07:21:09.015861 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206\": container with ID starting with cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206 not found: ID does not exist" containerID="cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.015920 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206"} err="failed to get container status \"cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206\": rpc error: code = NotFound desc = could not find container \"cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206\": container with ID starting with cf4445319d21d4c8a06ecfbf979f80e96d955ec170cd467fd60ae5fee5f06206 not found: ID does not exist" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.015942 4889 scope.go:117] "RemoveContainer" containerID="e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc" Nov 28 07:21:09 crc kubenswrapper[4889]: E1128 07:21:09.016162 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc\": container with ID starting with e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc not found: ID does not exist" containerID="e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.016186 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc"} err="failed to get container status \"e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc\": rpc error: code = NotFound desc = could not find container \"e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc\": container with ID starting with e74218ac292a8c3e83627dc128cccd6eebfccd47241dc4eb4cad375136d2ecdc not found: ID does not exist" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.016201 4889 scope.go:117] "RemoveContainer" containerID="9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e" Nov 28 07:21:09 crc kubenswrapper[4889]: E1128 07:21:09.016408 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e\": container with ID starting 
with 9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e not found: ID does not exist" containerID="9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.016429 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e"} err="failed to get container status \"9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e\": rpc error: code = NotFound desc = could not find container \"9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e\": container with ID starting with 9ec5ab7b8fd8e4535b45bb9aa79cd56cf5a97c9926403a5eace166a81190c61e not found: ID does not exist" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.210963 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d96a1218-c158-4f37-8a5d-7b725056eeb6" (UID: "d96a1218-c158-4f37-8a5d-7b725056eeb6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.228712 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96a1218-c158-4f37-8a5d-7b725056eeb6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.298630 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5bj2"] Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.306895 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n5bj2"] Nov 28 07:21:09 crc kubenswrapper[4889]: I1128 07:21:09.344945 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" path="/var/lib/kubelet/pods/d96a1218-c158-4f37-8a5d-7b725056eeb6/volumes" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.159137 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jlsnv"] Nov 28 07:22:13 crc kubenswrapper[4889]: E1128 07:22:13.160003 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerName="extract-utilities" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.160018 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerName="extract-utilities" Nov 28 07:22:13 crc kubenswrapper[4889]: E1128 07:22:13.160035 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerName="extract-content" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.160045 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerName="extract-content" Nov 28 07:22:13 crc kubenswrapper[4889]: E1128 07:22:13.160077 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerName="registry-server" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.160086 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerName="registry-server" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.160236 4889 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d96a1218-c158-4f37-8a5d-7b725056eeb6" containerName="registry-server" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.161442 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.183958 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlsnv"] Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.287569 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wnl\" (UniqueName: \"kubernetes.io/projected/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-kube-api-access-r8wnl\") pod \"redhat-marketplace-jlsnv\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.287622 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-catalog-content\") pod \"redhat-marketplace-jlsnv\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.287798 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-utilities\") pod \"redhat-marketplace-jlsnv\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.389165 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8wnl\" (UniqueName: \"kubernetes.io/projected/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-kube-api-access-r8wnl\") pod \"redhat-marketplace-jlsnv\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.389449 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-catalog-content\") pod \"redhat-marketplace-jlsnv\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.389632 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-utilities\") pod \"redhat-marketplace-jlsnv\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.390020 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-catalog-content\") pod \"redhat-marketplace-jlsnv\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.390088 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-utilities\") pod \"redhat-marketplace-jlsnv\" (UID: 
\"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.411518 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8wnl\" (UniqueName: \"kubernetes.io/projected/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-kube-api-access-r8wnl\") pod \"redhat-marketplace-jlsnv\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.503018 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:13 crc kubenswrapper[4889]: I1128 07:22:13.965372 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlsnv"] Nov 28 07:22:14 crc kubenswrapper[4889]: I1128 07:22:14.842504 4889 generic.go:334] "Generic (PLEG): container finished" podID="f152c7fe-61f9-4a40-a1c1-6ac5daf5250c" containerID="8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0" exitCode=0 Nov 28 07:22:14 crc kubenswrapper[4889]: I1128 07:22:14.842560 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlsnv" event={"ID":"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c","Type":"ContainerDied","Data":"8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0"} Nov 28 07:22:14 crc kubenswrapper[4889]: I1128 07:22:14.842820 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlsnv" event={"ID":"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c","Type":"ContainerStarted","Data":"e462a30861d0cb74a11f70ebb729b847822f0504b8e49d4035fa568943170666"} Nov 28 07:22:14 crc kubenswrapper[4889]: I1128 07:22:14.844226 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:22:15 crc kubenswrapper[4889]: I1128 07:22:15.850375 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlsnv" event={"ID":"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c","Type":"ContainerStarted","Data":"c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477"} Nov 28 07:22:16 crc kubenswrapper[4889]: I1128 07:22:16.859354 4889 generic.go:334] "Generic (PLEG): container finished" podID="f152c7fe-61f9-4a40-a1c1-6ac5daf5250c" containerID="c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477" exitCode=0 Nov 28 07:22:16 crc kubenswrapper[4889]: I1128 07:22:16.859406 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlsnv" event={"ID":"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c","Type":"ContainerDied","Data":"c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477"} Nov 28 07:22:17 crc kubenswrapper[4889]: I1128 07:22:17.868079 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlsnv" event={"ID":"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c","Type":"ContainerStarted","Data":"f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d"} Nov 28 07:22:17 crc kubenswrapper[4889]: I1128 07:22:17.890288 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jlsnv" podStartSLOduration=2.418002096 podStartE2EDuration="4.890262617s" podCreationTimestamp="2025-11-28 07:22:13 +0000 UTC" firstStartedPulling="2025-11-28 07:22:14.843988489 +0000 UTC m=+2057.814222644" 
lastFinishedPulling="2025-11-28 07:22:17.31624902 +0000 UTC m=+2060.286483165" observedRunningTime="2025-11-28 07:22:17.882459613 +0000 UTC m=+2060.852693768" watchObservedRunningTime="2025-11-28 07:22:17.890262617 +0000 UTC m=+2060.860496782" Nov 28 07:22:23 crc kubenswrapper[4889]: I1128 07:22:23.503201 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:23 crc kubenswrapper[4889]: I1128 07:22:23.503830 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:23 crc kubenswrapper[4889]: I1128 07:22:23.569627 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:24 crc kubenswrapper[4889]: I1128 07:22:23.990615 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:24 crc kubenswrapper[4889]: I1128 07:22:24.055237 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlsnv"] Nov 28 07:22:25 crc kubenswrapper[4889]: I1128 07:22:25.946420 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jlsnv" podUID="f152c7fe-61f9-4a40-a1c1-6ac5daf5250c" containerName="registry-server" containerID="cri-o://f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d" gracePeriod=2 Nov 28 07:22:26 crc kubenswrapper[4889]: I1128 07:22:26.946167 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:26 crc kubenswrapper[4889]: I1128 07:22:26.956403 4889 generic.go:334] "Generic (PLEG): container finished" podID="f152c7fe-61f9-4a40-a1c1-6ac5daf5250c" containerID="f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d" exitCode=0 Nov 28 07:22:26 crc kubenswrapper[4889]: I1128 07:22:26.956493 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlsnv" event={"ID":"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c","Type":"ContainerDied","Data":"f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d"} Nov 28 07:22:26 crc kubenswrapper[4889]: I1128 07:22:26.956535 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlsnv" Nov 28 07:22:26 crc kubenswrapper[4889]: I1128 07:22:26.956563 4889 scope.go:117] "RemoveContainer" containerID="f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d" Nov 28 07:22:26 crc kubenswrapper[4889]: I1128 07:22:26.956548 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlsnv" event={"ID":"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c","Type":"ContainerDied","Data":"e462a30861d0cb74a11f70ebb729b847822f0504b8e49d4035fa568943170666"} Nov 28 07:22:26 crc kubenswrapper[4889]: I1128 07:22:26.986176 4889 scope.go:117] "RemoveContainer" containerID="c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.008769 4889 scope.go:117] "RemoveContainer" containerID="8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.031494 4889 scope.go:117] "RemoveContainer" containerID="f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d" Nov 28 07:22:27 crc kubenswrapper[4889]: E1128 07:22:27.031980 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d\": container with ID starting with f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d not found: ID does not exist" containerID="f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.032021 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d"} err="failed to get container status \"f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d\": rpc error: code = NotFound desc = could not find container \"f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d\": container with ID starting with f73605ad45b75ad87d1827b51a9a76d0e6f091f6cb8c8c6581297b04c34e6e0d not found: ID does not exist" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.032045 4889 scope.go:117] "RemoveContainer" containerID="c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477" Nov 28 07:22:27 crc kubenswrapper[4889]: E1128 07:22:27.032349 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477\": container with ID starting with c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477 not found: ID does not exist" containerID="c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.032382 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477"} err="failed to get container status \"c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477\": rpc error: code = NotFound desc = could not find container \"c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477\": container with ID starting with c7785a185fc5d9e1a04357a8f270965c89ced7f14803c39580987613d4550477 not found: ID does not exist" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.032399 4889 scope.go:117] "RemoveContainer" 
containerID="8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0" Nov 28 07:22:27 crc kubenswrapper[4889]: E1128 07:22:27.032634 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0\": container with ID starting with 8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0 not found: ID does not exist" containerID="8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.032663 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0"} err="failed to get container status \"8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0\": rpc error: code = NotFound desc = could not find container \"8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0\": container with ID starting with 8fb4af711a27797f17ea708548c741c31e66a76bd6ed606533f78ffd950838b0 not found: ID does not exist" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.100478 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-catalog-content\") pod \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.100563 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-utilities\") pod \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.100787 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8wnl\" (UniqueName: \"kubernetes.io/projected/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-kube-api-access-r8wnl\") pod \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\" (UID: \"f152c7fe-61f9-4a40-a1c1-6ac5daf5250c\") " Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.101818 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-utilities" (OuterVolumeSpecName: "utilities") pod "f152c7fe-61f9-4a40-a1c1-6ac5daf5250c" (UID: "f152c7fe-61f9-4a40-a1c1-6ac5daf5250c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.111430 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-kube-api-access-r8wnl" (OuterVolumeSpecName: "kube-api-access-r8wnl") pod "f152c7fe-61f9-4a40-a1c1-6ac5daf5250c" (UID: "f152c7fe-61f9-4a40-a1c1-6ac5daf5250c"). InnerVolumeSpecName "kube-api-access-r8wnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.121898 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f152c7fe-61f9-4a40-a1c1-6ac5daf5250c" (UID: "f152c7fe-61f9-4a40-a1c1-6ac5daf5250c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.202397 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8wnl\" (UniqueName: \"kubernetes.io/projected/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-kube-api-access-r8wnl\") on node \"crc\" DevicePath \"\"" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.202443 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.202456 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.295384 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlsnv"] Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.298742 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlsnv"] Nov 28 07:22:27 crc kubenswrapper[4889]: I1128 07:22:27.339587 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f152c7fe-61f9-4a40-a1c1-6ac5daf5250c" path="/var/lib/kubelet/pods/f152c7fe-61f9-4a40-a1c1-6ac5daf5250c/volumes" Nov 28 07:23:28 crc kubenswrapper[4889]: I1128 07:23:28.782798 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:23:28 crc kubenswrapper[4889]: I1128 07:23:28.783445 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:23:58 crc kubenswrapper[4889]: I1128 07:23:58.783089 4889 patch_prober.go:28] interesting pod/machine-config-daemon-kwbr9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:23:58 crc kubenswrapper[4889]: I1128 07:23:58.783756 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwbr9" podUID="6a6707da-48a9-4e38-a1b2-df82148f0cd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"